
Samsung and Apple: What is Really Going On?

by Paul McLellan on 02-01-2015 at 9:00 am

Apple reported that it sold $74.6B in products last quarter and earned an all-time record (for any company) $18.06B in profit. Sammy reported its lowest annual profit since 2011 at $21.3B, down almost a third.

In 2013 mobile had been 70% of Sammy's profits, so any drop in mobile revenue or profitability would have an amplified effect, and it has. They just announced that profits in mobile dropped 64%. Luckily the semiconductor division's profits were up 35% for the quarter, partially compensating. It is good to be diversified sometimes.

According to one analyst firm, Strategy Analytics, Samsung and Apple sold the same number of smartphones last quarter: 74.5M each. This is remarkable since, going into the quarter, Samsung had twice Apple's market share (in units), even with its current profitability issues. Of course this is a special quarter. With Apple's once-a-year product releases, it sells a huge number of phones in the first quarter after announcement (34,000 an hour all quarter, as Tim Cook pointed out on the conference call) and then sales gradually taper off. Nobody wants to be the last person to buy the old model just before a new one is announced, so I expect Samsung will be ahead again this quarter. Another analyst firm, IDC, has Samsung a little ahead, by half a million phones or so. Since Samsung doesn't break out its numbers the way Apple does, the Apple number is much more accurate than the Samsung one. But the really important number is that Apple makes nearly 14 times as much profit as Samsung on those roughly equal numbers of phones.

One theory as to why Apple sold more than expected and Samsung less is that people really like large screens. Really like them. And as long as Apple didn't offer one, those buyers had to go to Samsung; the moment Apple did, there was a lot less reason to buy Samsung. I have heard that Apple is selling more of the large-screen iPhone6 models than it expected. Women in particular love them (they only have to fit in a purse); men not so much, since they have to fit in a pocket.

Apple sold a lot more iPhone6s than expected, and that pretty much has to come out of the high-end Galaxy business, which is presumably the highest-profit part of the market too. Xiaomi's sales actually fell last quarter, but they have still pretty much come out of nowhere and are eating into the low end of everyone's business, while Huawei, Lenovo/Motorola, and the other Chinese vendors all compete at the price-sensitive end.

Going forward isn't looking any better for Samsung. On the earnings call they said they expect "the business environment in 2015 to be as challenging as in 2014."

Apple, of course, doesn’t sell purely on price/features. I just read somewhere, can’t find it now, that Apple just took over from Louis Vuitton or someone like that as the most aspirational brand in China.

So, a prediction: Samsung will gain market share and Apple will retreat this quarter, although with Chinese New Year in a few weeks Apple will be strong for the first half of the quarter. Who wouldn't want to find an iPhone6 in their red envelope?

Another thing to watch: will Xiaomi do a deal with Facebook in the US, as rumored? It would give them instant credibility, since nobody over here, apart from the sort of people who read SemiWiki, has ever heard of them, and the original "Facebook phone" with HTC flopped. Every carrier would have to support Xiaomi immediately or risk losing a lot of business to their competitors. Never forget (as apparently Elop never realized at Nokia) that selling phones is all about carrier support. Microsoft discovered it too when they released the Kin without lining up carrier support and discontinued it six weeks later when it never got any.

And yet another thing: Android has 80% market share or so. But almost the only vendor making any profit was Samsung. Android will still have big market share but margins for everyone are razor thin while Apple runs away with all the money. It wouldn’t surprise me if Apple is making 2/3 or more of the profit for the entire smartphone market.


CEVA and LTE: Happy Together

by Majeed Ahmad on 01-31-2015 at 11:00 pm

Long Term Evolution (LTE)-based 4G technology is reshaping the wireless infrastructure landscape, and that brings a new set of opportunities for IP core licensor CEVA Inc. and its DSP offerings for multi-mode LTE base stations.

LTE devices—both handsets and radio base stations—are haunted by power constraints, mainly due to the requirement of complying with multiple network technologies: GSM/GPRS, EDGE, W-CDMA, TD-SCDMA, HSPA+, the FDD and TDD modes of LTE and LTE-Advanced, and Wi-Fi. That puts a significantly larger burden on radio base stations, which must adapt to changing traffic patterns across the 4G network.

Moreover, LTE-centric 4G infrastructure is gradually shifting from macro-cells for wide open spaces and metro-cells for high-population areas toward a heterogeneous network architecture, or HetNet—a multilayer system of overlapping big and small cells that pump out cheap bandwidth. But while HetNet turns big-tower cellular into a dense, multilayer, high-capacity network, it also demands greater adaptability and flexibility within base stations to carry out bandwidth engineering effectively.

As a result of these shifts in the base station market, infrastructure vendors are moving away from off-the-shelf chips supplied by ASSP vendors like Freescale and TI, and toward systems-on-chip (SoCs) for multi-mode LTE base stations. Here, CEVA's XC family of DSP cores promises to overcome the power consumption, time-to-market, and cost challenges associated with adopting SoCs for multi-mode LTE base stations.


CEVA-XC supports multiple wireless standards in software

The IP platform licensor has been positioning its CEVA-XC4500 family of special-purpose DSPs for LTE infrastructure needs through software-based modems that can serve the intense demands of multi-mode wireless baseband, smart wireless backhaul, and Wi-Fi offloading. The CEVA-XC4500 builds on the strengths of the CEVA-XC4000 family and adds the structure, performance, and low power necessary for wireless OEMs deploying new base stations into the HetNet.

The fourth generation of the CEVA-XC architecture, the CEVA-XC4500 DSP cores offer powerful fixed-point and floating-point vector capabilities, supplying the performance and flexibility demanded by LTE wireless infrastructure applications. They allow multi-mode LTE base station SoCs to adapt to different types of traffic and varying loads through a multicore arrangement built on DSP clusters. Each core can handle multiple queues to avoid network stalls or deadlocks.

The XC4500 DSP core can also perform digital front-end tasks, providing pre-distortion, sampling filters, up- and down-conversion, and other radio management functions. Moreover, it can support wireless backhaul with up to 4096 QAM, OFDM or single-carrier operation, wideband spectrum, and both TDD and FDD.


The CEVA-XC DSP core evolution

LTE Base Station SoCs

The Chinese wireless infrastructure vendor ZTE Corp. has licensed the CEVA-XC DSP core to design FDD/TDD multi-mode SoCs for LTE base stations. ZTE, like other wireless infrastructure OEMs, clearly sees mobile networks heading toward small cells and is readying small cell base stations using SoC platforms built around CEVA's DSP cores.

In July 2014, CEVA beefed up its Wi-Fi capabilities for 4G networks by acquiring RivieraWaves, a privately-held Bluetooth and Wi-Fi connectivity IP vendor based in Sophia-Antipolis, France. RivieraWaves has brought software-based Wi-Fi algorithms to CEVA’s LTE processing portfolio, a much-needed product in the context of small cells and access points within LTE networks.

DSP technology has expanded into almost every facet of communication systems. At the same time, however, general-purpose DSPs are falling short in next-generation networking applications like LTE and LTE-Advanced. Take, for instance, Multiple Input, Multiple Output (MIMO), one of the leading tools for improving data rates in LTE networks. MIMO increases the spectral efficiency of the channel, improving data rates within a given channel bandwidth by using multiple receive and transmit antennas.

MIMO, which creates multiple spatial streams, is a fundamental element of the LTE system and presents a classic case study for the implementation of special-purpose DSPs such as the CEVA-XC. For optimizing MIMO in LTE networks, CEVA proposes the Maximum Likelihood Detector (MLD) technique, a non-linear MIMO receiver implementation fundamentally based on an exhaustive constellation search.
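To make the search concrete, here is a small sketch of exhaustive ML detection for a hypothetical 2x2 link with a QPSK constellation (my illustration, not CEVA's implementation): the detector simply tries every combination of transmitted symbols and keeps the one that best explains the received vector.

```python
import itertools

# Illustrative exhaustive ML detection for a 2x2 MIMO link with QPSK
# (a sketch of the general technique, not CEVA's implementation).
QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def mld_detect(H, y):
    """Return the candidate transmit vector s that minimizes ||y - H*s||^2
    by brute force over every constellation combination."""
    best, best_metric = None, float("inf")
    for s in itertools.product(QPSK, repeat=len(H[0])):
        # r = H*s for this candidate vector
        r = [sum(H[i][j] * s[j] for j in range(len(s))) for i in range(len(H))]
        metric = sum(abs(y[i] - r[i]) ** 2 for i in range(len(y)))
        if metric < best_metric:
            best, best_metric = s, metric
    return best

# Noise-free check: the detector recovers the transmitted symbols exactly.
H = [[0.9 + 0.1j, 0.2 - 0.3j],
     [0.1 + 0.2j, 0.8 - 0.1j]]
tx = (1 + 1j, -1 - 1j)
y = [sum(H[i][j] * tx[j] for j in range(2)) for i in range(2)]
assert mld_detect(H, y) == tx
```

The search space grows as (constellation size) to the power of the antenna count: with 64-QAM and four antennas, the same loop would evaluate about 16.8 million candidates per symbol period, which is exactly why dedicated vector DSP horsepower is attractive here.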


MIMO is a fundamental element in the LTE system design

The above example shows that the complexity of LTE components like MIMO will make it imperative for LTE chip suppliers to adopt advanced DSP cores, and that LTE technology marks an important turning point in the evolution of SoC signal processing. That bodes well for CEVA's product roadmap and its efforts to take its wireless baseband DSP horsepower inside LTE and LTE-Advanced infrastructure chips.

The LTE build-out could pick up greater momentum during 2015, and so could CEVA's DSP core shipments for the LTE market. "This could be a good year for Ceva," said Will Strauss, President & Principal Analyst, Forward Concepts, in a recent company newsletter.

Image credit: CEVA Inc.

Majeed Ahmad is the author of the books Age of Mobile Data: The Wireless Journey To All Data 4G Networks and Essential 4G Guide: Learn 4G Wireless In One Day.


Intel to Launch 10nm Chips in Early 2017?

by Daniel Nenni on 01-31-2015 at 7:00 am

As I have mentioned before, Intel and the foundries approach process development from different starting points. Intel is committed to Moore's Law, reducing transistor cost by increasing process density in a near-linear fashion. The foundries, on the other hand, work closely with partners and customers to determine the power, performance, and area (PPA) goals of the next process node within a specific time to market (TTM). As we all know, Apple has a very specific TTM (iTTM), which will always be the priority.

14/16nm SoCs are already in production at Intel, Samsung, GlobalFoundries, and TSMC, with products due out in the second half of 2015. This will be the first time we really get an Apple-to-Apple, IDM-vs-foundry comparison, with the Intel Cherry Trail and Apple A9 SoCs, and I'm truly excited to see the first teardown. Considering the Apple A8 had 2B+ transistors on an 89mm2 (8.47 x 10.5mm) die, one can only imagine how many transistors the 14nm SoCs will have.
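For a rough sanity check on die counts like these, the classic first-order die-per-wafer estimate (wafer area over die area, minus an edge-loss term) can be sketched in a few lines; this is a textbook approximation, not data from Intel or any foundry.

```python
import math

# First-order die-per-wafer estimate: usable wafer area divided by die
# area, minus a correction for partial die lost at the wafer edge.
def gross_die_per_wafer(die_w_mm, die_h_mm, wafer_diameter_mm=300.0):
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area)
    return int(wafer_area / die_area - edge_loss)

# An 8.47mm x 10.5mm die (the A8) on a 300mm wafer comes out around
# 700-725 gross candidates, in the same ballpark as die calculators.
n = gross_die_per_wafer(8.47, 10.5)
assert 650 < n < 750
```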

Now that 14/16nm is in production we are looking to 10nm for our next cost reduction. I really am glad we are all calling it 10nm but as you know not all 10nm processes are created equal (Who Will Lead at 10nm?). The 10nm process design kits (PDKs) are just now hitting the streets so the design challenges have just begun. The foundries are targeting the end of 2015 for the first customer tape outs which generally means production one year later. My guess is that you will see products with 10nm silicon in the second half of 2017 which means we will again be on 14/16nm for 2016. Improved versions of course, maybe 16nm FF++++ or 14nm UUULP?

An Intel executive recently predicted 10nm would be available in 2017, in a candid interview on GulfNews.com, out of Dubai of all places:

“We have been consistently pursuing Moore’s Law and this has been the core of our innovation for the last 40 years. The 10nm chips are expected to be launched early 2017,” said Taha Khalifa, general manager for Intel in the Middle East and North Africa region.

Mr. Khalifa is a 24-year Intel veteran, so he should certainly know. Intel has a famous tick-tock model where they follow every architecture change with a die shrink. A tick is a die shrink and a tock is a new architecture. Broadwell was a 14nm tick, Skylake will be a 14nm tock, and Cannonlake will be a 10nm tick.

Back in the day, we used to judge microprocessors by clock speed (megahertz); it was a badge of honor, really. I remember buying a PC with a 40MHz AMD CPU for more money than one with an Intel 33MHz CPU. I even shamed my brother, who had just bought a 33MHz version. Computers were really like muscle cars for nerds back then. Recently an SoC friend of mine shamed me for commenting that the A8 ONLY ran at 1.4GHz versus 2GHz. What can I say, old habits die hard. With SoCs, the badge of honor is getting the best SYSTEM LEVEL performance, which now, thankfully, includes battery life.


What’s New with Static Timing Analysis

by Daniel Payne on 01-30-2015 at 7:00 am

When I hear the phrase Static Timing Analysis (STA), the first EDA tool that comes to mind is PrimeTime from Synopsys. This type of tool is essential to reaching timing closure for digital designs, identifying the paths that limit chip performance. Sunil Walia, PrimeTime ADV marketing lead, spoke with me by phone on Thursday to provide an update. The base STA tool from Synopsys is called PrimeTime SI and it provides:

  • Timing delay and noise analysis
  • ECO (Engineering Change Orders) guidance
  • Hierarchical analysis

An upgrade to PrimeTime SI is called PrimeTime ADV and it adds features like:

  • Advanced ECO
  • Parametric On-Chip Variation (POCV)

We first started hearing about PrimeTime ADV last year, and since the product introduction there are about 90 customers with 75 tape outs using it, so adoption is growing.

Related – Is Number of Signoff Corners an Issue?

ECOs

ECOs were historically implemented with a manual or scripted approach; however, in the last 5-6 years smaller process nodes have meant many more IP blocks, smaller routing channels, and even tighter spaces in which to make changes. Automating ECOs is more important now, not just for meeting timing but also for power, because of the interdependencies. About 3-4 years ago, new patented ECO technology was developed within Synopsys to meet these challenges.

One STA approach for ECO guidance looks at each endpoint in a timing graph, finds the timing violations in the path, and then sizes cells or inserts buffers to meet timing. This works automatically, but it increases die area and does not scale well. With PrimeTime ADV, the approach to ECO guidance is to collapse all endpoints into a global graph and find the optimal location to fix the most violations, not just one, while ensuring fixes do not break anything else. Because this composite graph takes all scenarios into account, the approach is more scalable, more memory efficient, and faster to run.
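The difference between the two approaches can be caricatured in a few lines (a toy sketch with invented node names, not Synopsys's patented algorithm): on a shared global graph, the best place to fix is the internal node that appears on the most violating paths.

```python
from collections import Counter

# Toy "global graph" ECO guidance (invented netlist, not Synopsys's
# algorithm): pick the internal node shared by the most violating paths.
violating_paths = [
    ["ff1", "u3", "u7", "ff9"],    # each path: startpoint .. endpoint
    ["ff2", "u3", "u7", "ff10"],
    ["ff4", "u5", "u7", "ff11"],
]

def best_fix_node(paths):
    # Count internal (fixable) nodes across all violating paths at once,
    # instead of repairing one endpoint's path at a time.
    counts = Counter(node for path in paths for node in path[1:-1])
    return counts.most_common(1)[0][0]

# One fix at u7 addresses all three violations; per-endpoint repair
# would have touched three separate locations.
assert best_fix_node(violating_paths) == "u7"
```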

Another improvement with PrimeTime ADV is that physical dimensions are now added to the constraints, so it knows the P&R congestion: what is open for buffer insertion and what is blocked. PrimeTime ADV reads in LEF and DEF physical data, then tells the place & route tool, IC Compiler, where to make the changes with Minimum Physical Impact (MPI):

You can still use PrimeTime ADV with other P&R systems from Cadence, Mentor, and Atoptech; the IC Compiler MPI flow is unique to the Synopsys methodology.

Related – Enabling 14 nm FinFET Design

On-Route Buffering

Another new feature is called on-route buffering: buffers are added along the route, where there are physical openings, spread out along the net, with the interconnect parasitics estimated so that the split net is optimal.
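As a rough sketch of the idea (invented example, not the PrimeTime ADV implementation), on-route buffering amounts to spreading buffers evenly along the net and nudging each one to the nearest physical opening:

```python
# Hedged sketch (invented example, not PrimeTime ADV's algorithm):
# spread buffers evenly along a 1-D route, nudging each one to the
# nearest open spot when its ideal location falls inside a blockage.
def place_buffers(net_len, n_buf, blockages, step=1.0):
    def is_open(x):
        return all(not (s <= x <= e) for s, e in blockages)

    spots = []
    for k in range(1, n_buf + 1):
        ideal = net_len * k / (n_buf + 1)  # evenly spread target point
        delta = 0.0
        while not (is_open(ideal + delta) or is_open(ideal - delta)):
            delta += step  # search outward from the ideal point
        spots.append(ideal + delta if is_open(ideal + delta) else ideal - delta)
    return spots

# Three buffers on a 100-unit net with a blockage at 70..80: the third
# buffer's ideal spot (75) is blocked, so it slides out to 81.
assert place_buffers(100, 3, [(70, 80)]) == [25.0, 50.0, 81.0]
```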

In the mobile market, designers value power reduction techniques. Because PrimeTime ADV has knowledge of sign-off timing, it can find positive slack and then optimize to reduce leakage power by a further 10-15%, swapping in cells with a higher Vt; this approach is quite easy to use.

Another power-reduction technique is down-sizing of cells, helpful to reduce dynamic power. PrimeTime ADV can do both down-sizing and Vt swapping together.
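A toy model of the slack-driven leakage recovery described above might look like this (illustrative only; the delay penalty and leakage numbers are invented, and PrimeTime's actual optimization is far more sophisticated):

```python
# Toy slack-driven Vt swap (invented numbers, not PrimeTime's algorithm):
# cells with enough positive slack trade speed for lower leakage.
def vt_swap(cells, hvt_delay_penalty=2.0, hvt_leakage_factor=0.3):
    """cells: dicts with 'slack' and 'leakage'. Swap a cell to high-Vt
    only when the added delay still leaves non-negative slack."""
    saved = 0.0
    for c in cells:
        if c["slack"] >= hvt_delay_penalty:
            c["slack"] -= hvt_delay_penalty
            saved += c["leakage"] * (1.0 - hvt_leakage_factor)
            c["leakage"] *= hvt_leakage_factor
    return saved

cells = [{"slack": 5.0, "leakage": 10.0},
         {"slack": 1.0, "leakage": 10.0},   # too timing-critical to touch
         {"slack": 3.0, "leakage": 10.0}]
saved = vt_swap(cells)
assert round(saved, 6) == 14.0          # two cells swapped, 7.0 each
assert cells[1]["leakage"] == 10.0      # the critical cell is untouched
```

Down-sizing would follow the same pattern, with a size-dependent delay penalty and a dynamic-power saving instead of a leakage one.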

Hierarchy

Hierarchical ECO guidance is a technique that supports multiply-instantiated modules (MIM), so consider an SoC with 5 identical cores. During timing analysis the graph is flattened to get precise timing, and with MIM it can satisfy all of the timing requirements for each of the repeated cores in a single analysis run.

Variation Technology

On-chip Variation (OCV) for timing margins and analysis has migrated to Advanced OCV (AOCV) and now Parametric OCV (POCV) as process nodes have moved to 20nm and smaller.

The POCV approach uses a statistical parameter called sigma, specific to each cell in the library. PrimeTime ADV propagates the sigma values along the graph, which is the least pessimistic approach; that means fewer violations in timing and a faster time to closure. Synopsys contributed changes to the Liberty Variance Format (LVF), which was approved as part of an IEEE standards process involving about 20 members.
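The statistical intuition behind POCV can be shown with a simple path model (my sketch, not the PrimeTime ADV algorithm): nominal delays add linearly, but independent per-cell sigmas combine as a root-sum-square, so the statistical late bound is tighter than a flat worst-case derate.

```python
import math

# Sketch of the POCV idea (not PrimeTime's implementation): each cell
# contributes a nominal delay and a sigma; sigmas combine as RSS.
def pocv_path_delay(stages, n_sigma=3.0):
    """stages: list of (nominal_delay, sigma) tuples along the path."""
    mean = sum(d for d, _ in stages)
    sigma = math.sqrt(sum(s * s for _, s in stages))
    return mean + n_sigma * sigma   # statistical late-arrival bound

stages = [(100.0, 5.0), (80.0, 4.0), (120.0, 6.0)]
pocv = pocv_path_delay(stages)                 # ~326.3
flat = sum(d + 3.0 * s for d, s in stages)     # 345.0: worst-case sum
assert pocv < flat   # the statistical bound is less pessimistic
```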

Learning

For users of PrimeTime SI there is a quick learning curve when adding on PrimeTime ADV. Contact your local AE, or try the tutorial, demo, app note, or online examples. Another good place to learn more is at a PrimeTime SIG event; the next one is at DAC. Webinars are helpful too, and you can view these in both English and Mandarin.


STMicroelectronics and SoCs

by Majeed Ahmad on 01-29-2015 at 7:00 pm

What does system on chip (SoC) actually mean? How did this tech moniker come into being? There is quite a bit of mystery in the technology press about the SoC and what the term really stands for. Roger Shepherd, consultant at Parallel Computer Systems, shares on Quora his version of the SoC story. He says that he first heard about SoCs when SGS-Thomson unveiled the STi5500 Omega "One-chip Multimedia EnGine Architecture" at the Western Cable Show in December 1996.

The single-chip solution integrated an MPEG-2 video/audio decoder, 32-bit processor, transport demultiplexer, Macrovision PAL/NTSC encoder, and video DAC. In fact, the STi5500 Omega device was an integration of two previous chips: the ST20-TP2 transport demultiplexer and the 3520 MPEG-2 decoder.

According to the International Directory of Company Histories, one of the first major breakthroughs at SGS-Thomson—which was created through the 1987 merger of the French semiconductor operation Thomson-CSF and the Italian chipmaker SGS Microelettronica—came in 1989, when it produced a new chip for Nokia handsets. SGS-Thomson combined power supply and power management features on a single chip, enabling Nokia to achieve a standby battery life of more than 60 hours. Eventually, Nokia became a major SGS-Thomson customer.

ASIC vendors like SGS-Thomson had started to address SoC opportunities during the 1990s by embedding microcontrollers and DSPs into system-level chips that subsequently enabled handheld games, speech processing, data communications, and PC peripheral products.


Pasquale Pistorio: SoC marks natural evolution of semiconductor industry
(Photo courtesy of Pistorio Foundation)

Eventually, with the STi5500 chip, SGS-Thomson's risky bet on MPEG decompression technology paid off handsomely when the set-top box market took off in the mid-1990s. Back in 1994, set-top boxes were simple channel-hopping devices for satellite and cable TV services. The MPEG revolution transformed the set-top box into an interactive programming device capable of handling applications like sports events and pay TV.

The set-top box transformation—spanning from 1994 to 1998—put SGS-Thomson in a leadership position in MPEG decoders, a key building block of digital set-top boxes. The European chip giant first supplied MPEG-2 decoder chips for Hughes Electronics’ DirecTV set-top box, and by 2000, it had captured nearly 62 percent of the market.


The STi5500 multimedia decoder chip
(Image: STMicroelectronics)

In 1998, amid the privatization drives of both the French and Italian governments, Thomson sold off its share in the European chipmaker, and SGS-Thomson became STMicroelectronics. All the while, the Franco-Italian firm maintained its focus on SoC-centric, system-level product development and increasing software content.

SoC: The Big Picture

ST was among the first crop of chipmakers to emphasize system-level products and SoC-centric designs. Many industry observers credit ST's SoC leverage for its improving chip market ranking. In 1998 and 1999, Dataquest ranked ST in 9th place in its annual chipmaker ranking. Fast forward to 2013: according to the IC Insights ranking, ST was the fifth largest semiconductor company in the world.

ST's focus on SoC technology was obvious at the high-level strategy event it held in Sedona, Arizona in December 2000, which ST called 'SoC: The Big Picture.' Jean-Philippe Dauvin, then ST's chief economist, told attendees that only chipmakers that offer OEMs a system-level package, and thus preserve their software investment, would eventually win. Other top managers at ST also emphasized that SoC means developing silicon that is tightly linked to final users' needs.

At the Sedona event, ST also briefed attendees on its SoC design guideline, internally referred to as the Bluebook, which served as a database of IP cores, software stacks, middleware, and other key SoC building blocks. The Bluebook SoC database facilitated IP reuse and included CISC and RISC processor cores, DSPs, accelerator engines, and more.

Later, in 2002, ST joined hands with Motorola and Philips to create a joint R&D center in Crolles, France for the development of new silicon architecture and libraries for low-power and high-performance SoCs targeted at consumer and communications devices.


(Image credit: Mouser Electronics)
STMicro’s STarGRID ST7590T system-on-chip for powerline communications

More than two decades ago, ST began the shift from commodity markets toward more specialized SoC products under the leadership of Pasquale Pistorio, who spearheaded the Franco-Italian company’s ascent from a debt-ridden semi-government operation to a semiconductor industry heavyweight. The SoC products allowed ST to take on several fast-growing niches such as disk drives and set-top boxes.

Also Read: A Brief History of STMicroelectronics

The SoC technology continues to develop at a relentless pace and is one of the fastest-growing corners of the semiconductor industry. ST—having been so close to the scenes of convergence at the silicon system level—knows the stakes of the SoC game all too well. The lackluster performance of ST-Ericsson—the 50-50 joint venture created through the merger of Ericsson's mobile chipset unit with ST-NXP Wireless—is a stark reminder of how competitive the SoC business has become. ST, an early entrant into system-level integration, is still in the game, and the SoC industry's pioneering spirit carries on.

Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and Nokia's Smartphone Problem: The End of an Icon?


NVM IP now Available for On-Chip MCU Code

by Eric Esteve on 01-29-2015 at 11:14 am

To date, NVM IP has mostly been used in SoCs or ICs to support very specific needs like analog trimming and calibration, or encryption key storage for Digital Rights Management (DRM) purposes. In other words, a small (less than 1 Kbit) few-times-programmable (FTP) NVM IP was enough to support these needs, so most of the NVM IP market was based on this type of memory: antifuse technology or an equivalent One Time Programmable (OTP) approach, or FTP when redundancy was added to allow more than one programming cycle. That is only a portion of the market need; if you look at the Flash application spectrum, most of it consists of MCU code storage.

The above-mentioned technologies are not cost effective for on-chip code storage, for two main reasons. First, if you launch an MCU with on-chip code, your customers will expect to be able to change the program many times, not just a few. Moreover, since those same customers expect enough NVM to hold the MCU code, an FTP solution quickly leads to a far too large IP block. If the need for on-chip NVM is really crucial, you may want to look at an embedded Flash CMOS technology, but if you target a cost-sensitive application, the chip cost will be hit by the many extra mask levels needed by an embedded Flash CMOS process, leading to a 25% or more higher chip price.


The DesignWare NVM IP portfolio pictured above shows that the "Medium Density" solution just announced by Synopsys should be the best fit if you are looking for NVM IP able to hold MCU code, offering decent write-cycle endurance (1K cycles or more) as well as code sizes up to 64 Kbit, while still being based on standard CMOS technology. Because this NVM family is based on a completely new architecture, the cell density is 5X better than lower-bit-count solutions. The Flash access time is less than 40 ns, well positioned to support the low-power MCUs used in IoT devices, for example. Automotive is also a target market for this NVM, so ECC has been added for reliability, and data retention is guaranteed for 10 years at 125°C.

The high-voltage circuitry (charge pump) and the memory array's digital control are part of the IP, to make the designer's life easier.

I have focused on the MCU code application; in fact, this Medium Density NVM IP mostly targets analog ICs like:

  • Smart sensors
  • Power Management
  • Touchscreen controllers

But the trend is to integrate microcontrollers into analog ICs, so this NVM IP is expected to be used massively in the analog market segment. If you need to be convinced that NVM IP is the more cost-effective solution, just take a look at the picture below: the curve represents IC area as a function of bit count. When the IC is small and the Flash capacity is very high (say above 256 Kbit), embedded Flash is better, but for most chips MTP is more cost effective (the largest part of the diagram area, on the left side).
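The crossover in that diagram can be captured with a toy cost model (all constants below are invented for illustration; they are not Synopsys data): embedded Flash pays a fixed mask-count overhead but stores bits densely, while logic-CMOS MTP adds no masks but needs more area per bit.

```python
# Invented cost model (illustrative constants, not Synopsys data):
# embedded flash pays a ~25% mask-count overhead (per the text) but
# stores bits densely; logic-CMOS MTP adds no masks but needs more
# area per bit, so there is a crossover in bit count.
def mtp_cost(base_area_mm2, kbits):
    return base_area_mm2 + 0.02 * kbits            # no extra masks

def eflash_cost(base_area_mm2, kbits):
    return (base_area_mm2 + 0.004 * kbits) * 1.25  # +25% mask overhead

base = 15.0
assert mtp_cost(base, 64) < eflash_cost(base, 64)      # 64 Kbit: MTP wins
assert mtp_cost(base, 1024) > eflash_cost(base, 1024)  # 1 Mbit: eFlash wins
```

With these particular constants the crossover falls near 250 Kbit, in line with the 256 Kbit figure mentioned above; the real crossover point depends on process, base die size, and mask costs.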

More information about the new DesignWare® Medium Density Non-Volatile Memory (NVM) IP family.

From Eric Esteve from IPNEST


Apple’s Implications for Semiconductor

by Robert Maire on 01-29-2015 at 11:00 am

Apple's iPhone 6 alone represents at least 50K wafer starts/month, plus the iPad (A8X). What could the iPhone 6S/7 and A9 mean? What about the iWatch? Apple is the technology and volume driver of the semiconductor industry, so let's take a look at the broader implications.

If we do the math, the A8 is 89mm2 (8.47 x 10.5mm). Using a die calculator, roughly 675 die can be squeezed onto a 12-inch wafer. If we assume a 75% yield, we get about 500 good die per wafer (I like to use round numbers). If we take 74.5 million iPhones over three months, that's about 25 million a month (or 34K an hour, but who's counting?). Divide by 500 per wafer and you get 50K wafer starts a month, give or take, depending on yield. That sucks up a significant chunk of a fab and doesn't leave a lot of room for other customers.
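The arithmetic above fits in a few lines if you want to play with the assumptions (round numbers, a sketch rather than fab data):

```python
# Back-of-the-envelope wafer-start arithmetic (a sketch, not fab data).
def wafer_starts_per_month(candidate_die, yield_frac, units_per_month):
    good_die_per_wafer = candidate_die * yield_frac
    return units_per_month / good_die_per_wafer

# A8: 675 candidates x 75% yield ~= 500 good die; at 25M phones a month
# that is on the order of 50K wafer starts per month.
a8 = wafer_starts_per_month(675, 0.75, 25_000_000)
assert 49_000 < a8 < 51_000

# Drop the yield to an early-ramp 20% on a similar die and the same
# volume balloons well past 100K starts, the A9 scenario discussed below.
a9 = wafer_starts_per_month(675, 0.20, 25_000_000)
assert a9 > 100_000
```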

The A9 could consume more than 100k wafer starts…
If we stretch and presume that Apple forces Samsung into eating the yield loss and goes whole hog at 14nm, the wafer starts could get excessively high. We could assume a die size similar to the A8, though we would bet that the A9 will be larger, with more functionality (SoC integration).

If we presume the same roughly 675 potential die on a wafer and apply a guesstimate of 20% yield, we would get 135 good die per wafer; if the die size were larger, we might only get 100 good die per wafer. If we assume that Samsung gets its act together on 14nm FinFET and manages 40% yield, that would be 270 good die at a similar die size.

If we assume that Apple continues to run at 25M phones a month we could see 100K wafer starts a month of capacity dedicated to the A9 (unless yields get better faster). This again, brings us back to our recent newsletter questioning whether the A9 will be 14nm FinFET or not.

The iPad Air 2 – A8X is no slouch either..
The A8X in the iPad Air 2 has 3 cores and 8 GPUs, compared to the dual-core, 4-GPU A8. The A8 is small at 89mm2 versus the A8X's 128mm2; the larger die size is due to doubling the number of GPUs and adding a third core. If we guess that each wafer could produce up to 450 A8X candidates, a 75% yield gives about 340 good die per wafer. Apple just sold a bit over 21M iPads in the quarter (7M per month). If we presume that 4M of those each month are equipped with the A8X, that's at least another 12K wafer starts/month.

The iWatch “S1” SOC….

Though not likely to be a big seller at first, we think the iWatch will be a steady climber, unlike the DOA Google Glass. It is likely to start off slow and then ride the coattails of the iPhone and iPad, so probably not a lot of wafer starts for the iWatch initially. We have already heard buzz about the second-generation iWatch even before the first generation is out. My guess is that the first gen will be an early-adopter, learning-tool type of device, followed by a more realistic and refined gen 2.

Korea, Austin and maybe Malta?
Looking at the number of wafer starts that Apple will be driving, combined with increasing sales and then multiplied by crappy yields, we are talking about Apple dominating a number of fabs. This could keep Samsung's logic fabs in both Korea and Austin very busy, as well as GloFo's Malta fab (if they get past the delay).

Then add in memory and support chips…
All we have talked about so far are the main SoCs for Apple. You have to add in all the memory, both NAND and DRAM, as well as the many other support chips, which are not insignificant in terms of fab capacity. Apple has gone to 2GB of DRAM for the iPad Air 2 after disappointing people with only 1GB on the iPhone 6. Upgrading the DRAM on the next iPhone is likely, and the average NAND installed continues to grow. All in all, we are talking about a lot of fab capacity being driven by Apple and its huge success.

Longer term looks good despite near term slow down….
Despite the fact that the industry is hitting a bit of a soft spot in the 14nm foundry and logic roll-out, the long-term demand for square inches of silicon remains quite good. We see no reason for Apple to slow down, even looking at the currency headwinds: international sales were 65%, and that was with an overly strong dollar. The iPad Pro and the iWatch will be a nice chaser to the amazing Q4 sales of iPhones, which blew past all expectations. Luckily, the mobile and IoT markets are growing much, much faster than the Wintel duopoly is fading. Intel is also lucky to have the cloud to sell processors to.

Apple driving the bus…..
Apple is firmly in the driver's seat of the tech industry bus. Semiconductors, software, servers, IoT, the cloud, finance, media, even Google and Android are all along for a great ride on the way to over 100M devices a quarter and beyond. It's no longer the VW Microbus that Steve Jobs sold to fund the start of Apple, but somewhere he is smiling at his vastly upgraded new wheels.

Robert Maire
Semiconductor Advisors LLC


NoC 102: Using SonicsGN to Address Low Power Requirements From IoT to Servers

by Paul McLellan on 01-29-2015 at 7:00 am

At the end of last year, I moderated a Sonics webinar to introduce the concept of a network-on-chip or NoC. It was called NoC 101 and the replay is still available here.

Well, it is a new year and time for chapter 2. I will be moderating a webinar next Wednesday, February 4th, at 10am Pacific time. Once again the webinar itself will be delivered by Drew Wingard, the CTO of Sonics. It is entitled NoC 102: Using SonicsGN to Address Low Power Requirements From IoT to Servers.

The performance and power requirements are very different for IoT devices such as wearables and for big server SoCs. But it turns out that the same underlying technology, the NoC, can be used in both cases to integrate the large numbers of IP blocks that may be involved, handle the power domains, and often manage the powering up and down of individual IP blocks.

Modern mobile devices are increasingly pushed to provide greater functionality at lower power. Improved architectures provide the most effective approach to minimizing power by dividing the SoC into a multitude of power and clock domains, ensuring that each domain operates at the lowest power level to satisfy the application requirements. The on-chip network increasingly plays a critical role in both supporting larger numbers of domains and enabling rapid, safe power-state transitions. The arrival of ultra low power devices (and ultra low power processes) is only going to make this more challenging.
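The idea of each domain operating at the lowest power level that still satisfies the application can be sketched in a few lines of Python. This is a minimal illustrative model only; the domain names and operating points below are invented, not taken from Sonics' products:

```python
# Hypothetical sketch: each power domain independently picks the lowest
# operating point (relative frequency/power pair) that still meets its
# current performance demand. Entries are sorted from cheapest to
# most expensive, so the first match is the lowest-power choice.

OPERATING_POINTS = [  # (label, relative_freq, relative_power)
    ("retention", 0.0, 0.01),
    ("low", 0.25, 0.10),
    ("nominal", 0.5, 0.35),
    ("turbo", 1.0, 1.00),
]

def pick_state(demand: float):
    """Return the cheapest operating point whose frequency meets `demand`."""
    for label, freq, power in OPERATING_POINTS:
        if freq >= demand:
            return label, power
    return OPERATING_POINTS[-1][:3:2]  # saturate at the fastest point

# Independent domains: an idle GPU can sit in retention while the modem runs.
domains = {"cpu": 0.4, "gpu": 0.0, "modem": 0.2}
for name, demand in domains.items():
    state, power = pick_state(demand)
    print(f"{name}: {state} (relative power {power})")
```

The key design point, which the webinar expands on, is that these decisions are made per domain rather than chip-wide, which is why the number of domains keeps growing.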

Furthermore, SoCs utilizing multicore processors often require very high bandwidth communication between processors, between processors and accelerators, and between these components and main memory. The design of these complex devices presents many challenges for the SoC designer, one of which is power management. Power consumption matters for all categories of multicore SoCs, from battery-powered devices, where power savings can lead to smaller batteries and/or longer battery life, to line-powered devices, where power savings matter for cooling as well as packaging and component costs. Even in high performance data centers, electricity and cooling costs can significantly exceed equipment costs.

The Network-on-Chip (NoC) providing on-chip communication plays an important role in the power management strategy of a multicore SoC. This webinar will address many of the techniques used to manage power consumption, including fine-grained and coarse-grained clock gating as well as voltage scaling and power switching with auto-wakeup capabilities. Building these techniques into the NoC simplifies the task of implementing an efficient power management strategy at the SoC level.

You can register for the webinar here.

Related Blog


Translating Intel
by Scotten Jones on 01-28-2015 at 10:00 pm

Some of Intel's technology posts make some pretty specific statements, and I have seen a number of posts where people seem to have misinterpreted what Intel was actually saying.

Multi Patterning
I have seen a lot of confusion on this one, with some people saying Intel didn't use multi-patterning at 22nm and others saying Intel used multi-patterning at 14nm for the first time. Let's start with where I think the root of this problem began.


Continue reading “Translating Intel”


Qualcomm Earnings Call
by Paul McLellan on 01-28-2015 at 9:35 pm

Qualcomm had their earnings call today. There has been a lot of discussion about their business going forward, and they lowered guidance. They didn't say explicitly why, but in the usual Kabuki theater of these things, this is what they did say:

While our outlook for the first half of the fiscal year is ahead of our prior expectations, our QCT forecast for the second half of the fiscal year has been reduced due to a number of factors. First, we are currently seeing a shift in share among OEMs at the premium tier, which has reduced the near term addressable opportunity for our Snapdragon processors and has skewed our product mix towards more modem chipsets in this tier. Second, we now expect that our Snapdragon 810 processor will not be in the upcoming design cycle of a large customer's flagship device, impacting our outlook for both volume and content in that device.
Translation: Apple's iPhone 6 is doing much better than expected, but Qualcomm only supplies the modem while Apple designs its own application processors, which reduces Qualcomm's revenue opportunity. Also, Samsung looks like it will use its own Exynos processor rather than the Snapdragon 810. Since Samsung is far and away the volume leader (twice Apple in units), this is a big loss (although Samsung has a big range of phones at different price points and this chip would not have been in all of them). He continued:

And thirdly, although we had a very strong competitive position exiting fiscal 2014, we are seeing heightened competition in China at the mid and high tiers. We are continuing to gain share year-over-year with OEMs based in China, but not at the pace we had previously expected. This is in part due to some product challenges with one of our chips in meeting some of the more demanding design points of those tiers. This has provided an opening to competitors who are being very aggressive in order to establish a position in the marketplace, resulting in more pricing pressure than previously expected.

Translation (actually just a guess): there is some truth to the overheating problems that are part of the reason Samsung switched away. It looks like some other Chinese manufacturers are holding back, not ramping, or perhaps switching to Taiwan's MediaTek. The Snapdragon 810 is in the Xiaomi Mi Note Pro, which should ship in high volume in China based on past experience. But in the Q&A they pretty much denied it:

On the 810, let me be very clear. The device is working the way that we expected it to work and we have design traction that reflects that. If you look at the number of designs, it's over 60. It's essentially won all the premium designs across multiple ecosystems in China, Windows Mobile, as well as Android. So we're quite pleased with how that is performing. There is a concern. As you mentioned it's related to one OEM, and I don't think you should extend that to imply that something has changed fundamentally between us and that OEM.

They also talked a bit about their problems with the NDRC (the Chinese antitrust regulator). Qualcomm invented CDMA pretty much single-handedly and has always had an aggressive royalty and patent license program. I negotiated a deal with them in the 1990s; we ended up walking away, and a year later, when they got desperate, we eventually got a chip license. The NDRC seems to think Qualcomm is abusing its monopoly position on the technology and demanding excessive license fees. But who is to say? The NDRC has already said it expects the case to be settled soon. Qualcomm also said on the call that many of their Chinese licensees are under-reporting, so they have increased their auditing.

The SeekingAlpha transcript is here. I have to warn you, it is pretty long.