
Intel Inline with reduced expectations-2015 flat to down-Slashing Capex
by Robert Maire on 04-16-2015 at 4:00 pm

Intel Inline with lowered numbers- 2015 Revs to be Flat…
Capex Slashed by 13% to $8.7B- 10nm at risk???
Mortgaging the future???
Is the foundry business dead???
Desperately seeking growth!!!

Intel Inline…

Intel reported revenues of $12.8B and EPS of $0.41, in line with downward-revised estimates after chopping $1B out of previous Q1 expectations. Client computing was the main culprit for the hit, down 16% Q/Q and down 8% Y/Y. Data center was down 10% sequentially but up 19% Y/Y.

The company sounds like it is hoping for some upside from the summer rollout of Windows 10, but we wouldn’t hold our breath waiting for a positive impact from that introduction as we think the XP upgrade cycle is already well played out. Guidance is for revenues to be flattish for 2015, with an obvious bias to the downside. Gross margin will be good at 61%, obviously helped by reduced spending. We didn’t hear much positive about desktop PCs other than the several hopeful comments about Windows 10.

We also didn’t hear a lot about tablets or mobile as those numbers are now buried in the financial results where they can’t be as easily picked apart.

No comment on M&A but lots of questions…
As expected there was no comment on the Altera rumors, but it is clear from the poor momentum of the PC business that Intel has to look elsewhere for growth.

Capex Cut is ominous…

Cutting capex down to $8.7B is very ominous from our perspective. Obviously Intel can help make its earnings numbers and get EPS growth in a flat revenue environment by cutting expenses, of which capex is the major item. Intel had already fallen to the number three position behind Samsung and TSMC and looks to be becoming a distant third very quickly.

The company made the normal excuses about reuse and moving 22nm capacity to 14nm, but we don’t buy that as the full reason. We find it hard to believe that Intel is that much better at reuse and efficiency than TSMC and Samsung. Capex is being cut because business is not that good. Intel did comment that yields and the ramp of 14nm were better than planned, but it’s been a long time and they should be very good by now.

You don’t get to be a leading player without spending a lot of money and we are very concerned about the company mortgaging its future to make near term earnings numbers to satisfy the street. We think this is potentially short sighted as it will help near term numbers but put in jeopardy Intel’s technology dominance that made the company what it is today. We hope we are wrong.

10nm at risk???
Though Intel refused to comment on 10nm timing, it is clear from everyone in the industry that they have pushed out spending, and it’s impossible to push out spending and reduce capex as much as they have without slowing the 10nm schedule. Our guess is that we are in the range of a nine-month to one-year delay from what it could have been. 10nm was supposed to be in Israel, and though Intel talked about 10nm spending in the second half of 2015, we haven’t heard a lot about it in the field.

Can TSMC catch Intel at 10nm???
Given TSMC’s increasing capex coupled with their stated aggressive 10nm plans, it feels as if TSMC has a real opportunity to catch up with Intel at 10nm. There is likely a continuing downward bias in capex at Intel, and it’s likely that management will be looking at ways to cut or delay spending to support EPS.

Foundry noticeably absent from call….
You wouldn’t know that Intel was allegedly in the foundry business from the conference call, as there was zero mention of it. If TSMC either catches Intel at 10nm or comes close, then the reasons for a fabless customer to use Intel foundry services would fall to below zero, as the only reason we can think of is Intel’s technology lead. If they lose that, then foundry is officially out of business, as it’s more expensive than TSMC and Intel is harder to work with.

Intel cutting 22nm capacity…
Intel did comment that 22nm capacity would be cut as fab capacity transitioned to 14nm. This supports the weaker demand picture, as Intel is usually able to milk returns out of older fabs for a longer period of time. Cutting 22nm capacity is a sign that 14nm is good, but also a sign that demand is not strong enough to keep it pumping out devices.

Equipment companies likely Whacked…
The Intel capex news, while not unexpected, is probably a lot worse than most bullish analysts and equipment company management were hoping for. The Intel cut obviously adds to the already strong industry headwinds we have been talking about. At this point the equipment industry is standing on the one leg of memory spending, as foundry and Intel (the other leg) aren’t supporting the weight. If memory spending gets more wobbly we are going to topple over.

Beware of Intel exposure…
Equipment companies that rely on Intel are obviously getting hurt, but it’s not like anyone was expecting an increase. Even if Intel’s business is just a proportional share of an equipment company’s business, it’s still hard to figure out what’s going to increase to make up for the Intel shortfall. We are now potentially looking at a down year for capex as a possibility.

Robert Maire
Semiconductor Advisors LLC

Also read: Moore’s Law is dead, long live Moore’s Law – part 1


Sensing Without (much) Power
by Paul McLellan on 04-16-2015 at 7:00 am

Do you have one of those step-tracker things? They seem to be one of the earliest IoT devices that are actually selling in large quantities. Smartphones are also starting to contain this sort of sensor to provide similar functionality without requiring a separate device, as are smart-watches such as the Jumpy watch for kids on the right.

Do you know what the top three things people complain about are?

1. the app is no good
2. the sensor is not accurate
3. the battery life is too short

One of the leaders in making the guts of this sort of product is QuickLogic with their ArcticLink 3 sensor hub, along with the SenseMe algorithms that take raw sensor data and turn it into step counts. It can sense taps and wrist rotation, and tell the difference between walking, running, cycling and swimming; whether the device is in your pocket or not on your person; and whether you are asleep. For all I know it can even tell what else you might be up to in bed.


    QuickLogic is not responsible for the App directly, although they do provide drivers for Android devices using their hardware. But they have a lot to do with the other two complaints.

The current version of ArcticLink has power as low as 75uW. Since it is always on, this is important. Further, since the sensor can detect “device not on person” it can further optimize system power by, for example, turning off the I2C bus since it won’t be needed until the device is worn again. The processor inside the QuickLogic sensor hub is a microDSP-like architecture. On its own the processor is just 32uW, which for comparison is a lot lower than the most miserly ARM Cortex microcontrollers. In a smartphone the power consumed by the sensor hub is a small part of the overall consumption, but in smaller devices going from 150uW to 75uW is the difference between 3 days and 7 days, or even 1 month and 2 months.
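To put the always-on budget in perspective, here is a minimal sketch of how average draw maps to battery life. The cell capacity and voltage are hypothetical (a CR2032-class coin cell); only the 75uW hub figure comes from the article:

```python
def runtime_days(capacity_mah: float, voltage_v: float, draw_uw: float) -> float:
    """Days of battery life for a constant average power draw."""
    energy_j = capacity_mah / 1000 * 3600 * voltage_v  # mAh -> coulombs -> joules
    return energy_j / (draw_uw * 1e-6) / 86400         # seconds -> days

# Hub alone on a 225 mAh, 3 V cell: halving draw doubles the hub's share of life.
life_at_150uw = runtime_days(225, 3.0, 150)
life_at_75uw = runtime_days(225, 3.0, 75)
```

The real device lasts days, not a year, because the hub is only one consumer of the battery; the point is simply that an always-on component’s draw scales its share of battery life inversely.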


The SenseMe algorithms are more accurate than other pedometers. There are really two forms of accuracy. The first is when you are actually moving and the device is at your ear, in your backpack or strapped to your arm: how accurate is the count? The other form of accuracy is when you are not actually walking: in a car, standing still and so on. Does the device correctly notice and count zero? A lot of competing devices do badly at this, and it is very noticeable since the user knows that the count should be zero. Only the most obsessive of users is going to count 1000 steps and see if the pedometer got the correct answer.


    Another use for the hub in a smartphone is to help with device recovery. The sensor knows that the phone has been put down and can even tell that it has been dropped. The GPS location can be sent back to enable faster recovery even if subsequently the battery runs out and the phone is not reachable. If we all start wearing smart-watches there is no reason your watch couldn’t tell you exactly where you left your phone. Insurance companies are very interested in this too, since they pay out when people lose their phones, and so they have an interest in people losing them less often.

QuickLogic is the only solution provider combining the power consumption of a custom ASIC, the flexibility of an MCU and the algorithm capabilities of a software company.

Details on the ArcticLink 3 S2 are here.



    Moore’s Law is dead, long live Moore’s Law – part 1
    by Scotten Jones on 04-15-2015 at 10:00 pm

    April 19th is the fiftieth anniversary of Moore’s law! We thought it would be a good opportunity to reflect back on fifty years of Moore’s law, what it is, what it has meant to the industry, what the current status of the law is and what we may see in the future.

    Moore’s law
Moore’s law is so well known that you wouldn’t think we would need to restate it, but the fact is that many people misunderstand and misstate the “law”. In Electronics magazine on April 19, 1965, Gordon Moore wrote: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year”. This observation became known as Moore’s law.

    Cramming more components onto integrated circuits
    By Gordon E. Moore, Electronics, Volume 38, Number 8, April 19, 1965

At the ASMC last year, after one of the talks, someone got up and said Moore’s law was a technology law but has now become an economics law. As you can see from the actual “law”, it has always encompassed economics. I have seen people talk about transistors per unit of horizontal area and many other versions of the “law” that aren’t in keeping with what was originally said. Even Intel’s on-line museum doesn’t accurately quote what Moore said.

    What I believe happened after Moore’s article was published is his observation became a benchmark for the industry. Companies realized that if the industry was following such a rapid integration and cost reduction path, that individual companies must follow the same path at the same rate or be left behind. This led to a kind of technological arms race that has endured to this day.

But Moore’s law’s influence was even wider than just driving manufacturing costs and pricing. Exponentially increasing component counts and exponentially decreasing costs have meant that the system you couldn’t build last year “because it was too complex and too costly” becomes possible, then affordable and finally widely used in just a few short years. When Moore wrote his article many of the products we take for granted today didn’t even exist: personal computers, tablets and cell phones, to name just a few. All of these products only became possible because of integration and cost reduction.

The resulting new products have driven incredible growth in semiconductor revenue: from less than $1 billion in 1960 to approximately $10 billion in 1978, over $100 billion in 1994 and over $350 billion in 2014. When Moore’s law finally ends it will have huge implications for the entire semiconductor industry and for the electronics industries the semiconductor industry supports.

    Moore’s law in Action
    In case you are wondering what Moore’s law has looked like for the semiconductor industry, figure 1 illustrates the price trends for a variety of products.

    Figure 1 includes several data series:

  • Worldwide prices for one million transistors. This is calculated by taking the worldwide semiconductor revenue and dividing it by the estimated transistors produced per year.
  • Intel price per million instructions per second of microprocessor power. This is likely the measure on the graph subject to the most error in interpretation. For many years we have estimated the processing power of Intel processors in millions of instructions per second (MIPS). There was a time when Intel directly reported this number, but over time we have had to correlate against a series of new benchmarks. In the last few years we have abandoned this effort, but there is still a lot of historical data. The price we pick each year is the processor with the lowest price per MIPS based on Intel’s published price list.
  • Price per megabit of memory for DRAM and NAND Flash. This is simply the worldwide revenue for each memory type divided by the worldwide bits produced.
  • A trend line with a 35% per year reduction in price.

Figure 1. Price per function trend. Source: IC Knowledge.

As you can see from the figure, since 1980 transistor, DRAM and Intel processor prices have all followed the 35% per year price reduction closely, meaning prices have actually dropped by more than half every two years! It is hard to see in the figure, but for 2013 DRAM prices per megabit actually went up, and then in 2014 returned to roughly the same level they were at in 2012. Although this is reflective of improved pricing power for the DRAM manufacturers, it also coincides with issues in continued DRAM cost reduction that will be discussed in a later article in this series.
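The more-than-halving claim follows directly from compounding the figure’s 35% annual trend; a quick check:

```python
# A 35% per-year price reduction compounds to less than half over two years.
annual_reduction = 0.35
remaining_after_two_years = (1 - annual_reduction) ** 2  # 0.65^2 = 0.4225
# i.e. prices fall about 58% every two years -- more than half.
```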

It is also interesting to note that initially NAND Flash prices fell much faster than the 35% reduction seen for other products, although since around 2008 NAND price reductions have begun to moderate and more closely follow the 35% trend. This graph and analysis are based on price trends, but what about manufacturing costs? Price and manufacturing cost are related by the following:

    Price = Manufacturing Cost + Gross Margin

    Examining our data on gross margins we see flat to slightly increased gross margin over the last several decades meaning that manufacturing cost must be declining at least as fast as prices. In the next installment of this series we will examine manufacturing and see how this incredible cost reduction has been accomplished. We will then examine the history, current status and future prospects of Moore’s law for DRAM, Flash and Logic.
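The gross-margin inference above can be sketched numerically (the 45% margin is a made-up illustration, not the article’s data): if gross margin stays a constant fraction of price, manufacturing cost must fall at the same rate as price.

```python
# Price = Manufacturing Cost + Gross Margin; hold margin at a flat fraction of price.
margin_fraction = 0.45                               # hypothetical, assumed flat
prices = [100 * 0.65 ** year for year in range(3)]   # 35%/yr price decline
costs = [p * (1 - margin_fraction) for p in prices]  # cost is the remainder
# costs fall 35%/yr too: cost tracks price exactly when margin % stays flat
```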

    Also read:
    Moore’s Law is dead, long live Moore’s Law – part 2
    Moore’s Law is dead, long live Moore’s Law – part 3

    Moore’s Law is dead, long live Moore’s Law – part 4
    Moore’s Law is dead, long live Moore’s Law – part 5



    Nokia on Top of the World, Again
    by Majeed Ahmad on 04-15-2015 at 4:00 pm

Nokia is no longer a mobile phone dynamo, but it’s now the world’s largest telecom equipment supplier, ahead of Ericsson AB and Huawei Technologies. Nokia is buying Alcatel-Lucent for $16.6 billion, and the new global networking behemoth created by this mega-merger, called Nokia Corp., will be headquartered in Finland.

The second edition of my book “Nokia’s Smartphone Problem”, released in November 2014, has a full chapter dedicated to the making of the new Nokia from the ashes of a mobile phone giant. The book argues that the Finnish handset titan started taking its wireless network infrastructure business far more seriously after Nokia found itself flat-footed in the post-iPhone mobile era.


    Nokia’s Smartphone Problem chronicles fall in smartphones and rise in LTE infrastructure

    Nokia’s bid to acquire Alcatel-Lucent is merely the continuation of the journey that it started with the creation of the Nokia Siemens Networks (NSN) venture in 2006. Nokia combined its networks hardware business with that of Siemens and began to offload non-core assets like telecom consulting services and fiber-optic businesses. Initially, NSN, like most of its competitors, was operating in both wired and wireless infrastructure markets.

    Nokia’s focus on the Long-Term Evolution (LTE)-based 4G infrastructure business played nicely for NSN, and by 2012, it had moved from fourth to second place in the LTE equipment ranking. In 2013, when Nokia sold its mobile handset business to Microsoft, the Finnish firm also paid Siemens $2.21 billion to gain full control of the NSN venture. Now the acronym NSN stood for Nokia Solutions and Networks instead of Nokia Siemens Networks. Eventually, it settled on Nokia Networks.

There was no doubt left in the early 2010s about where Nokia was heading. The ‘New Nokia’ was inevitably about the mobile infrastructure business. And the fact that Nokia had brought in NSN chief Rajeev Suri to head the remaining Finnish company further cemented the notion that the New Nokia saw its future in the wireless equipment business. Fast forward to 2015, and the Finnish mobile firm has completed its makeover by gobbling up a competitor of the size and scale of Alcatel-Lucent.


    Nokia buys Alcatel-Lucent for $16.6 billion
    (Image: Reuters)

The book “Nokia’s Smartphone Problem” makes the case that the Finnish electronics giant could reinvent itself like Apple and IBM. Nokia has a history of successfully adapting to market shifts, the book argued, and its wireless gear business could make up for its smartphone debacle. “Things could change rapidly in the technology world … and the wireless market is still a wide-open field,” the book concluded.

    Majeed Ahmad is the author of Nokia’s Smartphone Problem: The End of an Icon? The book is available in both paperback and e-book formats.


    IoT Security: Your Refrigerator Attacks!
    by Paul McLellan on 04-15-2015 at 7:00 am

Every time I see a presentation on IoT, the forecast for the number of devices in 2020 seems to go up by a few billion. But behind the hype there are clearly going to be a large number of devices on (and even in) our bodies, our homes and our cars, not to mention in factories and workplaces. IoT devices cover a wide spectrum. Realtors like to expand desirable neighborhoods as much as they can to include whatever property they have to sell, so areas like San Jose’s Rose Garden or San Francisco’s Noe Valley gradually grow. In the same way, marketers like to jam everything they can into the IoT category even though we previously had perfectly good categories like automotive or medical. Two things, though, seem to be common to almost every IoT application: low power and security.

Proofpoint found a wonderful example of security problems with IoT: “What startled Proofpoint researchers, though, is the fact that 25% of the messages didn’t originate from the usual suspects (i.e., laptops, desktops, or smartphones). Instead, they came from connected devices, such as home-networking routers, televisions—and at least one refrigerator.”

    Perhaps more worrying than being spammed by our refrigerators is infiltration of our IoT devices. If we are going to have self-driving cars we want to make sure that we decide where our car goes. If we are going to have medical devices that can, say, adjust insulin to match blood sugar levels then we want to be sure that nobody else can take control. This is not just a theoretical issue. At the end of last year, a steel mill in Germany was hacked causing “massive damage” when a blast furnace could not be shut down.

    It is increasingly clear that security requires a mixture of hardware and software. The heart of any security scheme is software algorithms along with something secret, typically encryption keys. These need to be kept in the hardware of the device so that:

    • the keys cannot be read by examination of the hardware
    • the keys are not lost when the device is powered off
    • the manufacturing cost of the key storage is minimized

In practice this means using some form of embedded non-volatile memory (eNVM). There are a number of different eNVM technologies commercially available, with different tradeoffs in cost, programmability, compatibility with the process technology and so on.

Keys are typically programmed into the device once when it is manufactured (or at most a few times over the life of the device). Antifuse one-time programmable (OTP) memory is a good match for the above requirements. It does not require a special manufacturing process like flash, it cannot be read even using expensive equipment like electron microscopes, and it is, by definition, non-volatile. It is nearly impossible to determine which bits are programmed because it is difficult to locate the oxide breakdown using chemical etching or mechanical polishing and looking at a cross-section or top view. Kilopass’s XPM OTP memories are security certified not just for commercial use but also for military use, and could not be successfully attacked by either passive or invasive approaches:

    OTP memory provides best-in-class security, can be manufactured in a normal process without extra mask steps, and is low-power. In short, a perfect match for IoT.

    The Kilopass product page is here.


    Will your next SoC fail because of power noise integrity in IP blocks?
    by Daniel Payne on 04-14-2015 at 5:00 pm

By the time your SoC comes back from the fab and you plug it into a socket on a board for testing, it’s a little late in the cycle to start thinking about reliability concerns like dynamic voltage drop, noise coupling, EM (electromigration), self-heating, thermal analysis and ESD (electrostatic discharge). They say that an ounce of prevention is worth a pound of cure, and that maxim is quite true when it comes to power noise integrity issues for our SoC designs filled with re-used IP blocks and subsystems.

    You could take a detailed, transistor-level approach of using a SPICE circuit simulator during the design and layout phases to measure the effects of power integrity, except that would mean you have to wait until your design has a clean LVS netlist and all of the IC layout is completed, which is just too late during the design cycle. There is a more elegant approach that uses reliability analysis throughout the entire design process, and ANSYS is one EDA vendor with tools and years of experience in this domain.

    Early Grid Weakness Analysis

As soon as your IP block has a GDSII layout, you can run an early Power and Ground (PG) grid analysis to help pinpoint any areas of the IC layout that have excessive resistance and high current draw. Feedback from this analysis allows the layout designer to start fixing the PG grid at the earliest point in the design.


    IP Power Integrity Sign-off Coverage

    Static IR Drop Analysis
As an IP block is powered up, current starts to flow in the network, eventually reaching the power and ground nets. The resistance of the PG nets multiplied by the current flowing through them creates a voltage drop, as Ohm’s law states: V = I*R. As the VDD level drops, so does the noise margin, so it’s important to analyze each IP block with a static IR drop analysis, then inspect the grid for any hotspots identified, fixing issues by adding vias or contacts or by widening PG nets to lower the resistance.
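The arithmetic behind static IR drop is just Ohm’s law; a toy check with made-up numbers (illustrative only, not output from any ANSYS tool) shows why the resistance fixes matter:

```python
def ir_drop_mv(resistance_ohm: float, current_ma: float) -> float:
    """Static IR drop (V = I * R), with current in mA and drop in mV."""
    return current_ma * resistance_ohm

# A 0.5-ohm PG segment carrying 40 mA drops 20 mV off the supply.
drop = ir_drop_mv(0.5, 40)
# Halving the resistance (widening the net, adding vias) halves the drop.
fixed = ir_drop_mv(0.25, 40)
```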

    Dynamic Voltage Drop Analysis
    The static IR drop analysis doesn’t take into account any of the dynamic switching nature of all circuits, so a dynamic voltage drop analysis adds further reliability coverage under switching conditions. Feedback from such an analysis helps the designer to add or even reduce metal straps, vias, contacts and interconnect widths.

    Related – Noise & Reliability of FinFET Designs Need Smart & Proven Methodologies – Success Stories

    Substrate Noise Analysis
    Digital circuits that switch simultaneously can actually inject currents into the substrate, affecting the electrical performance of nearby, sensitive analog circuits, creating computational errors and degrading performance. With substrate noise analysis you can get an idea of where the noise is coming from, and how effective your layout isolation techniques are. Running this type of analysis during assembly of IP blocks will ensure that each AMS block is well isolated from noise sources.

    Related – How Early Do You Analyze Substrate Noise in SoC Design?

    EM Analysis
There is a type of failure in aluminum or copper interconnect where the current density becomes so great that the atomic structure of the interconnect is altered enough to literally thin or narrow the wire, greatly increasing its resistance and degrading its reliability. The increased current in the interconnect also causes localized heating. With EM analysis the designer can see if their IP is over-designed or under-designed for these high-current effects. Both PG and signal interconnect should be run through EM analysis throughout the design process.
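A first-order EM screen compares each wire’s current density against a process limit; a minimal sketch (all numbers hypothetical — real limits are foundry- and layer-specific):

```python
def current_density(current_ma: float, width_um: float, thickness_um: float) -> float:
    """Average current density in mA per square micron of wire cross-section."""
    return current_ma / (width_um * thickness_um)

EM_LIMIT_MA_PER_UM2 = 10.0  # hypothetical process-specific limit

j = current_density(2.0, 0.5, 0.2)       # ~20 mA/um^2: over the limit
needs_fix = j > EM_LIMIT_MA_PER_UM2      # widen the wire or add parallel straps
j_wide = current_density(2.0, 1.0, 0.2)  # ~10 mA/um^2 after doubling the width
```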


    EM Sign-off Flow

    Thermal Analysis

With new transistor technology like FinFETs, the current drive per unit area is higher than for planar devices, which quickly leads to an increased thermal impact; even wires and vias can fail under high temperatures after enough cycles. You will want a thermal analysis tool capable of computing the actual thermal gradient on each IP block and then recalculating the EM limits on the wires and vias.


    Thermal Integrity Coverage

    Related – FinFET Designs Need Early Reliability Analysis

    ESD Analysis
It used to be that just IO pads needed ESD protection and analysis; however, at the 65 nm and lower nodes our IP blocks need ESD checks to avert device breakdown, melted signal lines and cross-domain issues. DRC checks are no longer sufficient for ESD analysis, so a simulation-based approach is more thorough and trusted. The physical factors that require ESD analysis are increased interconnect resistance, higher current densities and decreased transistor oxide thickness.


    ESD Integrity Coverage

    Related – SoCs More Vulnerable to ESD at Lower Nodes – Must Resolve

The specific EDA tools offered by ANSYS to help you handle power noise integrity are called RedHawk and Totem:


    IP sign-off for power noise integrity

    By running these analysis tools on each IP block, during chip assembly and before tape-out, you can validate that your SoC will work when it comes back from the fab.


    IP Integration Validation at SoC Level

    Summary

Power noise integrity is important to the reliable operation of your IP-based SoC designs, and using the methodology described here will help ensure that your next project works to spec without surprises.


    GPP, GPU or Embedded Vision Dedicated Processor?
    by Eric Esteve on 04-14-2015 at 9:04 am

Before answering the question we should try to define what is behind “vision”, which types of applications it covers, and evaluate the weight of this heterogeneous market. Embedded Vision (EV) is the use of computer vision in embedded systems to interpret meaning from images or video. Vision processing requires a lot of math functions that a General Purpose Processor (GPP) doesn’t support efficiently. Gaming is probably the oldest vision application, at least as old as the PC. The Graphics Processing Unit (GPU) was initially developed to support gaming, and the GPU is dedicated to vision, no doubt about it. But traditional GPUs burn a lot of power and require a large amount of (external) memory, again burning extra power. Even if the GPU is well tailored for console or PC gaming, GPU cost and power dissipation are prohibitive for the many innovative applications like surveillance, home automation, gesture recognition or object detection. Just add the automotive applications that support driver comfort or safety and you realize why embedded vision is an exploding market, expected to weigh in at up to $300 billion in 2020, exhibiting 35% CAGR!

Synopsys has developed a multicore architecture optimized for vision processing, the “DesignWare Embedded Vision Processor”, based on two (EV52) or four (EV54) 32-bit RISC CPU cores plus an object detection engine implementing a convolutional neural network (CNN). OpenCV (and the OpenVX libraries) is a very popular software programming environment used to develop vision-related systems, speeding application software development. The Synopsys EV processors offer performance as high as 1,000 GOPS per watt, and the IP vendor claims 5x better power efficiency than existing vision solutions from the competition.


On-chip memory, from 512 KB to 4 MB, can be shared between the object detection engine and the RISC CPU cores. The number of Processing Elements (PEs, on the right side) is programmable to run the convolutional neural network (CNN). The designer can run the PEs at up to 500 MHz on 28nm, while the CPUs can go up to 1 GHz, still on 28nm. Taking as an example a face detection function at 720p resolution implemented in 28HPM technology running at 500 MHz (at 30 fps), the power consumption is kept low at 175 mW and the footprint, including processor and memories, is very small at 2.6 mm^2.
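From the quoted operating point one can derive the energy per frame, a useful figure of merit for embedded vision (simple division over the article’s numbers):

```python
# Energy per 720p frame at the quoted face-detection operating point.
power_mw = 175.0  # 28HPM at 500 MHz, from the article
fps = 30
energy_per_frame_mj = power_mw / fps  # ~5.8 mJ per frame
```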

If we zoom into the object detection engine, the PE count is configurable from 2 to 8, as is the Streaming Interconnect, in fact a sophisticated crossbar dedicated to this function that can be dynamically reconfigured. As we can see, the Synopsys EV IP is far more than a CPU core, as it includes two (or four) RISC CPUs, two to eight configurable PEs connected via the reconfigurable streaming interconnect, plus a DMA and shared memories (from 512 KB to 4 MB). In fact this EV solution is a complete subsystem, optimized for embedded vision. This subsystem has been designed for use with a host processor:

• A portion of the EV processor memory space is visible to the host
    • Host and DesignWare EV synchronization is guaranteed through message passing.

For early software development the designer may use the available virtual platform models, and the HAPS FPGA-based platforms support hardware prototyping. Last point: DesignWare EV is host agnostic, and the subsystem interface is industry-standard 64-bit AXI.

    Availability and Resources
The EV52 (2-core version) and EV54 (4-core version) prototypes will be available by May 1st, 2015.
You may download the EV52 and EV54 datasheets.

    From Eric Esteve from IPNEST


    Grenoble Comes to San Francisco
    by Paul McLellan on 04-14-2015 at 7:00 am

The headquarters of STMicroelectronics is officially in Switzerland, but in many ways the center of gravity is in the Grenoble area. You may have heard of Crolles, about ten miles north-east of the city, where ST does process development, manufacturing and more. As a result, along with CEA-LETI and the Grenoble Institute of Technology (INP) being there, an ecosystem of small and medium-sized enterprises (SMEs) in electronics and semiconductors has grown up. These are pulled together into an umbrella organization called Minalogic, which stands for Micro-Nanotechnologies et Logiciel Grenoble-Isère Compétitivité; that is obviously French, but the only word that may not be obvious is “logiciel”, the French word for software.

    With 300 members, the Minalogic competitive cluster in Grenoble brings together major corporations, small and mid-sized businesses, universities and research centers, government agencies and organizations and investors from the public and private sectors. Minalogic’s goal is to foster research-led innovation in intelligent miniaturized products and solutions for industry. Since the creation of Minalogic in 2005, 388 projects have been certified and financed for total funding of €765 million (local and national funding), with a total R&D budget of €2 billion.

    Why am I telling you all this? Because Minalogic is coming to DAC. The president of Minalogic is Philippe Magarshack who has been in a number of positions at ST over the years driving their CAD and technology strategies. Currently he is formally EVP and CTO of embedded processing solutions.

    The “excuse” for the party is to celebrate the launch of Silicon Impulse, the new design center hosted by CEA-LETI to facilitate access to advanced technologies such as FD-SOI. The focus is very much Internet of Things (IoT), as Leti’s CEO Marie-Noëlle Semeria said when it was announced: “With Silicon Impulse’s one-stop-shop platform, 28nm FD-SOI heterogeneous, low-power design becomes a reality for the IoT community. Silicon Impulse helps Leti’s partners introduce innovative products that deliver optimal performance for these applications, and benefit from the most advanced technologies. The center combines Leti’s large portfolio of leading-edge technologies and novel low-power design solutions with a unique service for speeding integration of FD-SOI and other more advanced technologies (ReRAM, MEMS, 3DVLSI, Silicon Photonics), enabling heterogeneous low-power co-integration.”

    The EDA Showcase France will be held at the W Hotel at 4pm on Monday 8th June. That is just across the street from the Moscone Center where you will probably already be. You will be able to meet many of the Grenoble companies since they are exhibiting at DAC: Asygn, Defacto Technologies, Docea Power, Edxact, Infiniscale, IROCtech, Magilem, Xyalis.

    There will be an introductory speech by Philippe, a presentation by Jan Rabaey of UC Berkeley, the Berkeley Wireless Research Center, and more, on Design Trends and EDA Challenges from Connected Objects to Cloud Computing, followed by a networking reception until 6pm.

    So save the date. And if you decide to go then there will be an opportunity to RSVP nearer the time. The Minalogic website is here (in English as well as French).


    Advances in Nanometer Analog and Mixed Signal Design!
    by Daniel Nenni on 04-13-2015 at 10:00 pm

    Mentor’s annual user group meeting at the Doubletree Hotel in San Jose, CA is coming up on Tuesday, April 21st. This complimentary event provides a unique opportunity to share design techniques and exchange ideas with other users and experts in the design community. As you may have read, I am the star of the show, moderating a panel on The Changing Foundry Landscape: Trends and Challenges. But there are other events there that should be of interest. And did I mention this is a complimentary event with complimentary food and complimentary drink? The Doubletree puts out a nice spread, absolutely.

    Featured Sessions – Analog/Mixed-Signal (AMS) Verification Track

    Advances in Nanometer Analog/RF/Mixed-Signal Verification
    Presented by: Ravi Subramanian Ph.D., General Manager – AMS Verification, Mentor Graphics
    The analog, mixed-signal, and RF (AMS/RF) content of semiconductors is growing faster than at any time in history, and is at the center of the semiconductor industry’s next major cycle. This wave is largely being driven by the rise of nanometer mixed-signal application-specific standard products (ASSPs) targeted at new consumer, mobile, automotive, IoT, and datacenter applications. Performance targets for PLLs, ADCs, I/O circuits, PHY transceivers, image sensors, and embedded memories are becoming more stringent in the presence of higher device noise, lower supply voltages, less predictable process corners, and ever-increasing parasitics. This talk introduces important, proven new approaches that have been successfully deployed by leading design teams to help analyze a variety of physical and electrical effects (spanning parasitics, coupling, noise, distortion, variability, power, etc.) via innovative circuit analysis techniques. Specific case studies will be shown to illustrate the approaches used to target specific problems, and the underlying technology that helps achieve this success.

    Remember, Ravi was CEO of Berkeley Design Automation when they were acquired by Mentor. Ravi is very approachable and you will not meet another semiconductor executive with more hours logged in front of customers and partners. I worked closely with Ravi managing the strategic foundry relationships for BDA up until the acquisition so yes I know this by experience.

    Device Noise Analysis of Precision Analog Circuits with the Analog FastSPICE Platform
    Presented by: Dr. Boris Murmann, Associate Professor, Stanford University
    Device noise, including thermal and flicker noise, is a significant limiter on the performance of precision analog circuits, in particular the switched-capacitor circuits found in CMOS mixed-signal ICs. This paper reviews design challenges with these circuits from a theoretical perspective and provides best practices and simulation examples using the Mentor Analog FastSPICE (AFS) Platform. AFS delivers foundry-certified SPICE accuracy with industry-leading performance and full-spectrum device noise analysis. Circuits discussed range from track-and-hold circuits, to integrators, to SC delta-sigma ADCs.
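    As a back-of-the-envelope illustration of why sampled thermal noise limits switched-capacitor circuits, the classic kT/C relation gives the RMS noise voltage stored on a sampling capacitor. This is a minimal sketch of the textbook formula, independent of any simulator or the AFS platform itself:

    ```python
    import math

    BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

    def ktc_noise_vrms(cap_farads, temp_kelvin=300.0):
        """RMS sampled thermal-noise voltage on a capacitor: sqrt(kT/C)."""
        return math.sqrt(BOLTZMANN * temp_kelvin / cap_farads)

    # A 1 pF sampling capacitor at 300 K stores roughly 64 uV rms of noise;
    # quadrupling C only halves the noise voltage (noise power goes as 1/C).
    v_1pF = ktc_noise_vrms(1e-12)
    print(f"{v_1pF * 1e6:.1f} uV rms")  # prints 64.4 uV rms
    ```

    The 1/sqrt(C) scaling is the crux of the design tension mentioned above: lowering noise means larger capacitors, which cost area, power, and settling time.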

    I have worked with Boris before and will see him again at the EDPS Workshop panel I’m chairing next week: FinFET vs FDSOI – Which is the Right One for Your Design? It is all about WHO you know in this industry and you should definitely get to know Boris.

    Design and Circuit Verification Challenges of Inter-Die Interfaces for 2.5D/3D IC Architectures
    Presented by: Miguel Miranda Corbalan Ph.D., Staff Engineer, Qualcomm
    One of the key challenges in 2.5D/3D IC architectures is the high-speed I/O interface between multiple dies/tiers. The design and characterization of these interfaces have significant circuit verification accuracy and performance requirements in order to achieve data rate and signal integrity specifications on par with single-die SoC implementations. Challenges over and above single-die implementations include power and signal integrity control over off-die interconnect structures. This paper describes the nanometer circuit verification requirements for power and signal integrity verification of a high-speed two-die system. We present results from the methodology deployed at Qualcomm® using the Analog FastSPICE™ Platform from Mentor Graphics®. This methodology delivered SPICE-accurate simulation results, validated against silicon measurements, with a 4x speed-up compared to a traditional multi-threaded SPICE simulator, enabling the successful verification and optimization of the high-speed I/O architecture.

    I have not met Miguel but we are connected on LinkedIn so he has that going for him.

    Verification of Mixed-Signal Interaction of Analog-Centric ICs
    Presented by: Senthil Vinayagam, Principal Design Engineer, Cobham Semiconductor Solutions
    Analog-centric ICs used in automotive, aerospace, defense, and medical applications have increasingly become mixed-signal designs, with analog functions strongly coupled to complex digital control logic. The interaction between analog and digital blocks in these ICs can include feedback loops that pose a significant verification challenge. Traditional methodologies that verify the functionality of the analog and digital blocks separately are no longer sufficient and may leave hidden bugs in the IC undetected, resulting in silicon re-spins. This presentation describes the verification of an ADC, designed for high-reliability telecommunication and imaging applications, using the Eldo circuit simulator and Questa ADMS mixed-signal simulator from Mentor Graphics.

    I do not know Senthil nor are we LinkedIn but you can count me in on anything automotive or medical.

    Featured Keynotes

    • “Secure Silicon: Enabler for the Internet of Things”

    Presented by: Wally Rhines, Chairman & CEO, Mentor Graphics

    • “Mega Trends Driving Architectures of Mobile Computing and IoT devices”

    Presented by: Karim Arabi, VP of Engineering, Qualcomm

    View the agenda and register today!


    Silicon Valley, It’s About Culture
    by Arthur Hanson on 04-13-2015 at 4:00 pm

    Spreading the culture of Silicon Valley is the best way to take the US and the world to a better place. This culture is already spreading, but many organizations, and especially governments, are doing everything possible to hold on to the past, to the detriment of all. Silicon Valley and other US businesses have made the strongest statement that can be made by keeping and investing money overseas ($2.1 trillion). This is the ultimate indictment of a government that displays gross contempt as steward of the people’s money and resources. It is not only taxes, but regulations and laws that act as taxes, that are becoming a detriment to the lead Silicon Valley has given the US.

    Silicon Valley embraces the best and fastest way of moving ahead and improving our lives, with the least downside of any competing culture. The points below are the basic framework that makes Silicon Valley and the tech sector tick. It isn’t just about silicon; it’s about the business and social process that has extended its reach into every corner of our society.

    Efficiency is the number one factor driving our tech sector. Less power, materials, space, waste, cost, time and especially less holding on to the past. Automation of everything from design to manufacturing takes this a step further. Delivering more is the next factor: More performance, functions, intuitiveness, migration paths and above all better results.

    What makes much of this possible is a greater acceptance of failure, as when venture capital firms willingly accept an eighty-plus-percent failure rate. It is Silicon Valley’s ability to manage and work with failure, reaching even higher each time, that creates some of the greatest displays of the human spirit. This allows more dreams to be put into action, and as a result far more dreams become reality. This culture has turned systems that were large, expensive and limited into powerful, inexpensive commodities that fit in the palm of our hand.

    The Silicon Valley culture of labor flexibility, transparency, tolerance of failure, and ever-increasing performance at ever-lower cost now needs to be taken to government if the US and many other countries are to survive and prosper in a world of ever-increasing challenges. We can no longer afford a government that literally becomes more wasteful and failure-oriented. Almost iron-clad security for employees and departments, combined with a lack of transparency, has led to increasing waste, inefficiency, failure and sometimes, sadly, even outright fraud. Bureaucracy in many cases has become more about survival and growth than results. It’s time to demand value and results by having our government adopt the Silicon Valley culture of accelerating value and results at ever lower costs.

    Transparency is the foundation of integrity, honesty and fairness, and no tool empowers it like the net, even when the government has made every effort to hide its actions from the people. Our government’s failures have given us the world’s most expensive medical, education, military, and legal systems, with as many people in prison as China and Russia combined. We can’t be that evil, can we?

    The top companies in Silicon Valley have cast the most important vote of all, and that’s with their money. No business person would ever want to give money to an organization that wastes eighty percent of it. They fully understand that much of the accelerating government spending and waste is actually detrimental to the health of society. The government’s answer to almost any problem is to throw money at it, even as its actions cause the problem to go from bad to worse.

    It’s no wonder Silicon Valley doesn’t want to feed this government culture that is becoming more and more about failure, waste, inefficiency and most sadly of all, outright corruption. No system is perfect, but our government has been in a state of decline for years.

    The Silicon Valley culture is by far its most important product, and it is far too important to mankind to leave its spread to chance. Only rarely is the government able to manage failure in a positive manner, and this is the most valuable process the government could learn from Silicon Valley. The greatest document ever written is our Constitution, and Silicon Valley is our greatest hope to embody the ultimate potential of the human spirit. US businesses have made a $2.1 trillion vote for what the best and brightest think our government should do. This is not only about money, but about lives and bringing out the very best in our people. It’s not about taxes; it’s about value and a government that isn’t delivering. At one time our government served the people; it’s time for the government to serve the people again, and not itself. It’s a simple decision between the culture of Washington and the culture of Silicon Valley.

    I offer these links as references for what’s possible, for just one of many possible solutions, and for what our true debt actually is.

    http://www.forbes.com/sites/johngoodman/2015/03/31/singapore-a-fascinating-alternative-to-the-welfare-state/2/

    http://www.usdebtclock.org/