
Over-under: Apple, 52M iPhones in 4Q

by Don Dingee on 09-20-2012 at 8:15 pm

I’m in a Twitter conversation with some friends, with the subject: how many phones can Apple ship in the 4th quarter?

A respected analyst said 52M is “an easy mark” for Apple; others are saying 58M is the target for just the iPhone 5 in 4Q. However, the start for the iPhone 5 has been anything but easy. Oh, the orders are probably there – 2M iPhone 5 orders in 24 hours indicates really strong demand.

It’s been a good year. Apple’s 1Q iPhone shipments were 35M units, 2Q were 26M units, and we’re obviously waiting on a figure for 3Q.

The questions for 4Q are two-fold, however. One is supply, the other is demand.

Apple’s supply problem points to their selection of a Sharp 4″ LCD display, at least for the initial build. Rumor has it the yield at Sharp is less than 40%. Other sources are LG and Japan Display, but there’s no word on how well they will ramp up. Sources say the capacity of each supplier is something like 7M units per month, if everything goes right. There are also unspoken concerns about the A6 chip inside, but my guess is Samsung has that pretty well in hand. With the display and memory business already taken away for the iPhone 5, Samsung doesn’t want to be the long pole in the tent on the A6.
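For a back-of-the-envelope feel (my own arithmetic, not from any source), here is the best case if all three display suppliers hit the rumored 7M units/month, treating that capacity as pre-yield. Only Sharp’s sub-40% yield figure is rumored; the LG and Japan Display yields below are pure guesses:

```python
# Best-case 4Q display supply from the rumored figures above.
# Sharp's 40% yield is the rumor; the LG/Japan Display yields are guesses.
capacity_per_month = 7_000_000  # units/month per supplier, if everything goes right
months = 3                      # the quarter
yields = {"Sharp": 0.40, "LG": 0.70, "Japan Display": 0.70}

supply = sum(capacity_per_month * months * y for y in yields.values())
print(f"Best-case 4Q display supply: {supply / 1e6:.1f}M")  # vs. the 52M mark
```

Even with generous yield guesses for the untested suppliers, that lands well under 52M displays for the quarter, which is the crux of the supply question.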

Then there’s demand. Apple is now saying they are 3 to 4 weeks from filling the initial orders. I’d also question how prior demand for iPhone 4S units will hold up in the face of iPhone 5 availability. Both those factors may slow the frenzied pace of orders just a bit. One thing that might help would be if Judge Lucy Koh somehow puts the kibosh on shipments of the Samsung Galaxy S III. There’s also the remote possibility that the Nokia Lumia 920 and the just-announced HTC 8X will ship as expected in time for the holidays, Microsoft willing, and actually take away some demand.

The numbers being tossed around for Apple are huge, to the point where one has to question if they can ship as many orders as they can book. I have no doubt this gets worked out in 2013, but the pressure is on and I think 4Q12 is in some doubt.

I say “under” 52M, with new, untested suppliers in the critical path. Discuss.


Damn! Cramer Figured It Out

by Ed McKernan on 09-20-2012 at 8:04 pm

As an investor, one always has to be aware of the moment when Jim Cramer informs the world that the investment scenario you have been playing has come out of the shadows and seen the light of day. Soon the herd will follow, which is positive, but now one has to figure out how long to ride the roller coaster. In an article posted on thestreet.com entitled “Tech is Treacherous,” Jim Cramer bemoans the fact that it is September and the annual seasonal trade in technology stocks doesn’t appear to exist outside of Apple and its derivatives (Qualcomm, Broadcom and Cirrus Logic). What is going on?

The PC, if I can be blunt, is becoming unfashionable. I must have bought or built at least 15 of them since the late 1980s, but I have recently been taken in by my wife’s iPad with 4G. If I can get my company to outlaw PowerPoint, then I can convert completely and drop a few pounds from my Hartmann bag. Many of you are probably thinking the same thing, and it doesn’t necessarily have to be Apple; Samsung and Amazon have you covered as well. We want devices that can be carried in one hand, with the option of a keyboard or not.

The economics and the trends of the Mobile Tsunami are shaping up to be ever more powerful as these devices become substitutions for, not additions to, our PC environment. Winners become losers, or should I say travellers on the road to extinction, because the Device Makers have decided to go Vertical in the supply chain, cutting out Intel, nVidia and AMD while reducing Toshiba, Micron, Elpida and others to subservient status. Apple makes 90% margins on NAND flash upgrades as it rides the cost curve down without increasing density, which makes it difficult for the NAND players to envision a future of making money.

The battlefield for the coming year will be the $500-$999 market segment that Intel has designated as the heart of the volume ultrabook market. Apple has held the MacBook Air up at $999, allowing the iPad, with its average-performing $20 processor, to be the product line that occupies the space. Meanwhile, Intel expects to receive between $100 and $200 for its ultrabook processors. The dilemma then is not just the processor price disparity but the form factor and the O/S. If consumer and corporate purchasers turn away from 13” LCD-based notebooks or the Microsoft Win 8 O/S, then there is no price at which Intel can sell processors that will win the market. Just to be clear, I do believe there will be a >$1000 corporate market for notebooks. It will, however, be the consumer who turns first and serves as the canary in the coal mine.

And so back to Jim Cramer. He is getting close to the truth without understanding the complete picture. Back in the good old mainframe days, IBM was vertically integrated and highly profitable. In the early 1980s, however, they found themselves way behind the curve in the nascent personal computing market. Don Estridge’s newly formed group was given the flexibility to outsource the semiconductor components and the operating system, thus creating Wintel and what would become the vast horizontal supply chain that profited immensely during the next 20 years. Now, though, we are returning to the Vertical Integration Model through the example of Samsung and Apple. Google and Amazon may follow, but options are limited. Perhaps Intel will step in to be the semiconductor foundry for $15 processors and $25 baseband chips and hope that the foundry model doesn’t creep up into the higher PC price bands and swallow them, as Ivy Bridge is expected to do to AMD and NVIDIA.

The surprise to many this year is that smartphones and tablets are pushing the envelope on leading-edge process. One node back will not do when low power and integration are what drive out costs or increase battery life. GlobalFoundries’ announcement that they will have a 14nm production process in 2014 completes the picture: every foundry has to jump into the leading-edge pool or be left high and dry. This, however, doesn’t guarantee profitability. More likely Qualcomm and Apple will continue to squeeze down on the foundries, who will be investing heavily in semiconductor equipment and thus create a new, unexpected winner in the Mobile Tsunami race: ASML.

If this scenario unfolds and ASML turns out to be a growth proxy for the Leading Edge Semiconductor Industry then it will be ironic that the investment provided to it by Intel, Samsung and TSMC will be in many ways analogous to IBM buying a 20% stake in Intel in the early days of the PC to ensure that it had the R&D funds to develop future x86 processors. That investment turned out to be exactly at the beginning of the great semiconductor growth cycle of the 1980s.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM and ALTR. I am not invested in ASML but may be next week or afterwards.



2nd International Workshop on Resistive RAM at Stanford

by Ed McKernan on 09-20-2012 at 8:02 pm

A veritable who’s who of ReRAM researchers will be present at the 2nd International Workshop on Resistive RAM at Stanford at the beginning of October. Sponsored by IMEC and Stanford’s NMTRI (Non-volatile Memory Technology Research Initiative), the program features two days of talks, panel sessions and no doubt lots of networking. The advance program is online, and over at ReRAM-Forum.com, moderator Christie Marrian has some observations on the topics and presenters. While there is a truly global presence, there are some intriguing-sounding talks from local Silicon Valley companies: Adesto, Intermolecular and Rambus. For more information on the workshop visit www.ReRAM-Forum.com.


Automating Complex Circuit Checking Tasks

by SStalnaker on 09-20-2012 at 7:24 pm

By Hend Wagieh, Mentor Graphics

At advanced IC technology nodes, circuit designers are now encountering problems such as reduced voltage supply headroom, increased wiring parasitic resistance (Rp) and capacitance (Cp), more restrictive electromigration (EM) rules, latch-up, and electrostatic discharge (ESD) damage, which are all sources of possible circuit failure. While none of these effects suddenly appeared in the nanoscale era (sub-130nm), they have become progressively more serious, and must now be addressed during circuit verification to ensure robust designs and reliable operation.

ESD protection is one of the most important reliability issues in today’s CMOS integrated circuit (IC) products. ESD failures caused by thermal breakdown due to high current transient, or dielectric breakdown in gate oxide due to high voltage overstress, can result in either immediate failure of IC chips, or gradual degradation of circuit performance. To obtain high ESD resistance, CMOS ICs must be designed with on-chip ESD protection circuits at the input/output (I/O) pins and across the power lines. Turn-on-efficient ESD protection circuits clamp the overstress voltage across the gate oxide to limit the voltage swing.

The conventional approach to checking such circuitry is to use DRC rules that target areas of a layout with potential reliability issues, which requires adding extra layers called “marker layers” to the IC layout database. Adding marker layers is a manual process, making it ripe for mistakes, and it requires additional DRC runs, which extends verification time.

A better solution is provided by a circuit verification tool that automatically identifies the reliability-sensitive geometries and applies specialized physical checks to catch real design problems that impact circuit reliability. The Calibre® PERC™ tool uses information from an IC’s netlist to identify circuit topologies of interest, such as possible ESD paths and probable electrical failure sources, as well as the presence or absence of ESD protection circuits. For example, protection circuits are identified by recognizing device types and connectivity patterns based on any of the ESD configurations commonly known and widely used in designs today (Figure 1). Beyond verifying that the required circuits are present and connected properly, Calibre PERC measures the layout to ensure that interconnects are sufficiently sized to meet a maximum current density requirement. This can be accomplished without the need for marker layers or other manual interventions.

Figure 1. Some commonly used ESD protection configurations

The ESD EDA working group has defined and recommended 39 ESD checks that can be implemented using Calibre PERC to assure a reliable design that is robust against ESD events and electrical failure. In these checks, ESD verification is performed on multiple levels, from cell level to package level, including intra- and inter-power-domain ESD checks.

As an example, to implement verification of ESD protection between I/O and power rails, the tool performs the following steps:

1. Identify the core circuit that requires ESD protection.
2. Check that the correct ESD protection devices are being used, and are appropriately connected to the core circuit.
3. Check that there are no violations in the ESD protection device connection polarity.
4. Check the sizes of the ESD devices against existing ESD sizing rules (to meet current density requirements, clamp width requirements, etc.).

Since the tool is programmable, ESD checks can easily be expanded to cover parasitic extraction on metal interconnects, and design rule checks on selected ESD topologies, protected circuits, or violating circuits. It can also validate electrical compliance by performing resistance and current density analysis to ensure that the wiring of these devices is robust enough to support the largest currents that can be induced (Figure 2).
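In spirit, a netlist-driven check following the steps above can be sketched in a few lines. The sketch below is a toy illustration in Python, not Calibre PERC rule syntax; the netlist format, device names, and the 50µm sizing rule are all hypothetical:

```python
# Toy netlist-driven ESD check (illustration only, not Calibre PERC syntax).
# For each I/O pad: verify a clamp diode to each rail exists, with correct
# polarity, and that each clamp meets a (hypothetical) minimum width rule.

MIN_CLAMP_WIDTH_UM = 50.0  # hypothetical sizing rule

def check_esd_protection(netlist, pads, vdd="VDD", vss="VSS"):
    """Return a list of violation strings for the given pad nets."""
    violations = []
    for pad in pads:
        up = down = None
        for dev in netlist:
            if dev["type"] != "diode":
                continue
            # Diode from pad (anode) to VDD (cathode): clamps positive strikes.
            if dev["anode"] == pad and dev["cathode"] == vdd:
                up = dev
            # Diode from VSS (anode) to pad (cathode): clamps negative strikes.
            elif dev["anode"] == vss and dev["cathode"] == pad:
                down = dev
            # Any other diode touching this pad and a rail is reversed:
            # it would conduct during normal operation.
            elif (pad in (dev["anode"], dev["cathode"])
                  and {dev["anode"], dev["cathode"]} <= {pad, vdd, vss}):
                violations.append(f"{pad}: reversed clamp polarity ({dev['name']})")
        if up is None:
            violations.append(f"{pad}: missing clamp to {vdd}")
        if down is None:
            violations.append(f"{pad}: missing clamp to {vss}")
        for dev in (up, down):
            if dev and dev["width_um"] < MIN_CLAMP_WIDTH_UM:
                violations.append(f"{pad}: clamp {dev['name']} undersized")
    return violations
```

A real rule deck would of course also trace resistance along the discharge path and check current density, as in Figure 2, but the pattern-matching flavor is the same.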

Figure 2. Calibre PERC can address P2P (point-to-point) resistance extraction in the physical design, perform current density analysis, and execute design rule checks on identified topologies.

Automatic voltage propagation of input sources through the design, and voltage-dependent DRC checks, provide additional capabilities that simplify complex circuit verification. For example, the tool can be used to perform advanced electrical rule checks (AERC), which identify signal lines crossing voltage domains in mixed-signal or multi-domain low-power digital designs.

Additional information about advanced ESD checking can be found in the white paper entitled Solving Electrostatic Discharge Design Issues with Calibre PERC.

Hend Wagieh is a Senior Technical Marketing Engineer for Calibre Design Solutions at Mentor Graphics. She may be contacted at hend_wagieh@mentor.com.


Schematic Capture, Analog Fast SPICE, and Analysis Update

by Daniel Payne on 09-20-2012 at 1:10 pm

At the DAC show in June I met with folks at Berkeley DA and heard about their Analog Fast SPICE simulator being used inside of the Tanner EDA tools. With the newest release from Tanner, called HiPer Silicon version 15.23, you get a tight integration between: Continue reading “Schematic Capture, Analog Fast SPICE, and Analysis Update”


GlobalFoundries Announces 14nm Process

by Paul McLellan on 09-20-2012 at 8:00 am

Today GlobalFoundries announced a 14nm process that will be available for volume production in 2014. They are explicitly trying to match Intel’s timeline for the introduction of 14nm. The process is called 14XM, for eXtreme Mobility, since it is especially focused on mobile. The process will be introduced just one year after 20nm, an acceleration of about a year over the usual two-year process node heartbeat.


The process is actually a hybrid, a low-risk approach to getting (most of) the power and performance advantages of 14nm without a lot of the costs. The transistor is a 14nm FinFET, but the middle-of-line and back-end-of-line are unchanged from 20nm, specifically the 20nm-LPM process. So in effect it is a 20nm process with 14nm transistors (at 20nm spacing). But this process gets a lot of the advantage of 14nm: 56% higher frequency at low operating voltages and 20% higher at high operating voltages. Or the voltage can be reduced by 160-200mV, resulting in a 40-60% increase in battery life.
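As a rough sanity check on those battery-life numbers (my own back-of-the-envelope, assuming dynamic power scales as V²·f and a nominal supply around 0.9V, which GlobalFoundries has not stated):

```python
# Dynamic CMOS power scales roughly as V^2 * f. Holding frequency constant
# and dropping Vdd by the quoted 160-200mV from an assumed 0.9V nominal
# gives the battery-life multiplier below.
v_nominal = 0.9                      # volts (assumption, not from GF)
for dv in (0.16, 0.20):              # quoted voltage reduction range
    v = v_nominal - dv
    power_ratio = (v / v_nominal) ** 2
    life_gain = 1 / power_ratio - 1  # fractional battery-life increase
    print(f"-{dv * 1000:.0f}mV: {life_gain:.0%} longer battery life")
```

With a 0.9V nominal this comes out around 48-65%; assuming 1.0V instead gives roughly 42-56%. Either way, the quoted 40-60% figure is consistent with simple V² scaling.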

Of course there is not really any area reduction from 20nm, since all the metal pitches are the same. But it avoids the cost of having even more layers requiring double patterning, and avoids needing to contemplate triple patterning. This is something I’ve been talking about recently, since there is a risk that future processes will have higher performance and lower power but not lower cost. In the short term, for existing high-margin products like the iPhone 5, this might not matter much, but in the long run it is 1000-fold reductions in cost that drive electronics. If we hadn’t had those, we might still be able to make an iPhone in principle, but it would cost as much as a 1995 mainframe of equivalent computing power.

GF have “fin-friendly migration” (FFM) design rules, which they reckon should ease the migration of planar designs, especially if they are already in the GF 20nm process that forms the basis for 14XM, since only the transistors have changed.

Test silicon is being run in GlobalFoundries’ Fab 8 in upstate New York. Like other foundries, GF have been working on FinFETs for years even though they have none in a production process. Because the only thing that is new is the 14nm FinFET transistor, early process design kits are already available, and customer product tape-outs are expected next year with volume production in 2014.


For more information on FinFETs (not specifically GlobalFoundries), here is my blog on a talk by Dr. Chenming Hu, the inventor of the FinFET. It explains the motivation for moving away from planar transistors to FinFETs (or FD-SOI, the only alternative technology that seems viable at present).


ReRAM Based Memory Buffers in SSDs

by Ed McKernan on 09-19-2012 at 11:40 am

In a paper at the VLSI meeting in Hawaii, Professor Ken Takeuchi described using a ReRAM buffer with an SSD. He points to some major performance gains that one can expect from such a configuration in terms of energy, speed and lifetime. Is this an opportunity for ReRAM that could spur development of the technology? Read more in a post over at www.ReRAM-Forum.com….


High Speed PHY Interfacing with SSIC, UFS or PCI Express in Smartphone, Media Tablet and Ultrabook at Lower Power

by Eric Esteve on 09-19-2012 at 10:54 am

We have recently commented on the announcement from the MIPI Alliance and PCI-SIG allowing PCI Express to be used in smartphones, media tablets and Ultrabooks while keeping decent power consumption, compatible with these mobile devices. The secret sauce is in the high-speed SerDes function selected to interface with these high-bandwidth protocol controllers: SuperSpeed USB Inter-Chip (SSIC) from USB-IF, Universal Flash Storage (UFS) from JEDEC and PCIe from PCI-SIG. That function is the second PHY defined by the MIPI Alliance, M-PHY, first specified in April 2011 at 1.25 Gbits/s and recently updated (June 2012) to run at 2.9 Gbits/s; a third generation coming next year will hit up to 5.8 Gbits/s.

You may also read the announcement from the MIPI Alliance, “MIPI® Alliance M-PHY® Physical Layer Gains Dominant Position for Mobile Device Applications”, and see the quote from Joel Huloux, MIPI Alliance Chairman, at the end of this post…

For the information to be complete: the D-PHY (the first PHY specification defined by the Alliance) was developed primarily to support camera and display applications, and is now widely used in systems shipping today. But M-PHY is the function that should be integrated now to cope with data bandwidth demand in the Gb/s range; it can be compared with SATA 6G or PCIe Gen 2, but at lower power consumption. Low power is the key word for mobile applications like smartphones and media tablets; it is exactly the feature which explains why ARM-based application processors dominate mobile systems as strongly as x86-based chipsets dominate the PC world (but not the mobile world). In fact, it would be very nice to have actual figures for M-PHY power consumption, say at 2.9 Gbit/s on a 28nm (low power) technology node, and to compare them with PCI Express or USB 3.0 PHYs at a similar speed and technology node. I am sure that this information is available “somewhere”; sharing it within the industry could be a good idea…

According to an analyst quoted in the PR from the MIPI Alliance, “MIPI’s D-PHY interface is currently the dominant technology in mobile devices and we anticipate that its M-PHY interface will follow suit.” There is at least one good reason why D-PHY is dominant: if you remember, this function was created to support display and camera interfaces. Displays were relying on Low Voltage Differential Signaling (LVDS), a pretty old technology that forced the use of wide parallel busses at the expense of real-estate-hungry connectors, so a serial solution like D-PHY with the Display Serial Interface (DSI) specification was welcomed. Camera controller IC interfaces with the application processor varied with the chip supplier, not a comfortable situation for the wireless phone integrator, so the Camera Serial Interface (CSI-2) specification with D-PHY was also welcomed.

This explains why CSI-2 and DSI were among the first MIPI specifications to be widely adopted. But the camera controller chip makers, who integrate the controller on the same IC as the image sensor, on a mature technology node like 90nm, may have a hard time integrating a multi-Gbit/s SerDes in such a technology. The story is slightly different for the display controller chip, but I suspect the high voltage needed to drive the display leads to the same situation: mature technology nodes are not high-speed SerDes friendly! It may be that D-PHY will still be in use for some time to support display and camera interfaces, and this is consistent with the feedback from IP vendors like Synopsys, who still see demand for D-PHY in 28nm…

Now we can come to the up-to-date M-PHY which, as wisely noted by a clever analyst, “…will follow suit.” To prepare the quote I made for the MIPI Alliance in the M-PHY related PR, I interviewed several IP vendors, as I thought their answers would be a good indication of how popular the function is. So my question was “How many RFQs have you received (for M-PHY)?”, as I knew that asking about the number of design-ins, or sales, would not be answered. As they knew that the answers might be published, let’s have a look at the respective answers from Synopsys, Mixel, Cosmic Circuits and Arteris.

Arteris does not sell any PHY IP, but their Low Latency Interface (LLI) IP has to be integrated with an M-PHY. Arteris claims that LLI is used in ten projects; if you consider that the specification was only issued in February this year, that is a pretty good adoption rate! Most probably, the fact that LLI allows sharing a single DRAM between the application processor and the modem, saving the cost of one DRAM, is a good enough incentive! Arteris’ perception of the market is that the AP chip makers prefer using in-house designed M-PHYs, at least for the time being. Our perception is that the chip makers adopting LLI are probably the market leaders, and they have dedicated PHY design teams. This may change when Tier-2 and Tier-3 players also adopt LLI, according to Kurt Shuler, VP Marketing for Arteris.

This remark from Arteris makes the answer from Navraj Nandra, Director of Marketing for PHY IP products at Synopsys, even more interesting, as Synopsys claims to have received 120 RFQs for M-PHY. Let’s make it clear: a Request For Quotation (RFQ) does not mean that a product will be sold in the end. But it is a good indication of the market’s behavior with respect to M-PHY.

I would like to thank Ganapathy Subramaniam, CEO of Cosmic Circuits, as he was the first to answer (during the weekend), saying that the company has seen 10+ RFQs for the M-PHY. As far as I am concerned, my estimate would be that M-PHY could generate up to 20 IP sales in 2012, but I may be optimistic, considering that many of the functions will come from internal sourcing…

The year 2013 could be very interesting to monitor, as numerous functions, like PCI Express, UFS and SSIC on top of MIPI-specific functions like LLI, DigRF v4, DSI-2 or CSI-3, could be integrated, leading to controller IP sales (for the above-mentioned IP) and M-PHY IP sales. This could happen not only for chips used in mobile systems, but also in the traditional PC and PC peripheral segments, as wisely noted by Brad Saunders, Chairman/Secretary of the USB 3.0 Promoter Group: “MIPI’s low-power physical layer technology makes it possible for the PC ecosystem to benefit from the SSIC chip-to-chip interface”.

Quote from MIPI Alliance Chairman: “M-PHY has truly become the de-facto standard for mobile device applications requiring a low-power, scalable solution,” said Joel Huloux, Chairman of the Board of MIPI Alliance. “We are pleased to join with our partners at JEDEC, USB IF, and PCI-SIG, and MIPI member companies, to advance solutions that push the envelope in interface technology.”

Eric Esteve – from IPNEST

     

     


Synopsys-SpringSoft: Almost Done

by Paul McLellan on 09-19-2012 at 8:01 am

Synopsys announced today that they have cleared the two main hurdles to acquiring SpringSoft. Remember, SpringSoft is a public Taiwanese company, so it has to fall in line with Taiwanese rules. The first hurdle: they have obtained regulatory approval in Taiwan for the acquisition (roughly equivalent to FTC approval in the US, I think). And second, over 51% of the outstanding shares of SpringSoft have been tendered (roughly equivalent to voting in favor of the merger; once 51% of the shares have been tendered, Synopsys has to purchase the remaining shares and take over the company). Synopsys expects the deal to close definitively on October 1st, next week.

UPDATE: Actually, Synopsys contacted me to point out that the deal doesn’t close on October 1st. They will take control of SpringSoft on that date and will start to consolidate SpringSoft financials into Synopsys financials. The deal will technically close later.

SpringSoft has two product lines that are completely separate, except that they are sometimes purchased by the same company.

The Laker (Laker³ is the latest version) product line is a layout editor with a lot of advanced analog placement and routing capability. Synopsys already had a layout capability. Then they bought Magma, who had another, reputedly better one. Then they bought Ciranova, who had a well-regarded analog placer. Now with SpringSoft they have yet another. I have no idea which technology will survive or how they will merge all this in the end.

The Verdi (Verdi³ is the latest version) product line is for functional verification. SpringSoft don’t have their own simulation capability, but Verdi allows you to manage the verification process and analyze results. Synopsys also have verification environments (not to mention simulators). Again, over time, presumably these technologies will be merged.

They also have some interesting FPGA-based technology called ProtoLink. It allows FPGA-based verification to be done much more effectively, by making it possible to change which signals are probed on the fly without recompiling the entire netlist, which for a big design is very slow. I don’t think Synopsys have anything like it.


SemiWiki? There’s an App for that

by Paul McLellan on 09-18-2012 at 7:20 pm

The iPhone version of the SemiWiki App is now available. Download it from the iTunes store; it’s free. The App lets you look at much the same things as the website, but adapted much more conveniently to fit the small screen. When you first start up the App you can log in (assuming you are a registered SemiWiki user, and why would you not be?).

For example, here is the front page of SemiWiki, with all the articles in order. You can, of course, scroll to get to more articles below. And if you tap on any particular article you get to read it, formatted for the small screen.

It only went live today so I’m not going to claim that I’ve had extensive experience of playing with it, but it certainly seems easy to use and everything seemed to work as you would expect.

Daniel Payne did a more detailed explanation of how to navigate within the App here. He was actually discussing the Android version of the App, but since they are essentially the same (once you’ve got the App installed), you can read his blog entry and it all applies just the same on the iPhone, except that to get to the “8 choice” top menu you tap the “home” button in the middle of the top row. That will get you here:


The App is a big advance on trying to run SemiWiki’s regular website in Safari (or any other browser), where everything is always the wrong size and it takes more clicks than you would like to navigate around.

It’s on the App Store here.