
FPGAs – The Possibilities are Endless – Almost

FPGAs – The Possibilities are Endless – Almost
by Luke Miller on 04-26-2013 at 8:00 pm

Has your wife ever said “Your name, I’m not a computer”? Well, maybe mine has. I know what you are thinking… this guy is married? Yup, I overachieved too. We have child #7 on the way, Lord willing, so you probably guessed I don’t follow much of the world’s planning and such. Like you, no one in my house really understands what I do, nor cares much. Hey, they are like middle management; get ready for your yearly review where acronyms are king. In my last review I was told I was not visible enough, so I guess I need to eat more. OK, I need to stop. While humans have what would seem like infinite possibilities, and women are multithreaded and operate in a non-binary way, I look to the possibilities of the finite, non-personal FPGA for some amazement. My bumper sticker says “I break for HLS” (I stole that from my Xilinx buddy).

Every time I test a new bit image on a new device and the FPGA passes the smoke test (the done light is on and the math is working), I think wow, I can’t believe this is working. Now, it is not because I’m that bad of a designer, I hide that well. I’m just in awe of all the things that have to work just to make my little algorithm crank away. Don’t get me wrong, it is not like watching a child being born, or even a seed popping through the soil in my garden, but the sheer magnitude of all the collective effort around the world to get an FPGA on a board that works is simply amazing. From the fab lines, to node characterization, IO design, hard IPs, 3rd party tools to aid the layout, DRCs, parasitic modeling, place and route 10 times over, I could keep going. The configuration scheme alone I’m sure is years’ worth of work. Inside that wonder square of gates are billions of transistors, and you know what? They work! And not only that, they work for a long time; they are reliable. Did I forget to mention all the high speed, 20+ layer board design, the micro switching power supplies? I would have to say the demo board that I program easily must have had thousands of paws helping out so I could make a design a reality. Now I know I missed a whole bunch in there so don’t get nervous, I know you helped too, and if you’re dead wood you at least faked it; by the way, you are fooling no one. Can you say sequester?

The largest Virtex-7 has a configuration bit stream of 293,892,224 bits. That’s a lot. Many, many possibilities. Now don’t get technical on me; let’s just say all 293,892,224 bits are in play, and that could be 2^293,892,224 different designs. I wonder how long it would take to find the bit stream that matches my beamformer design. That is a neat thought. Too bad it would take billions of years to find out, but the idea is that you design toward what you expect rather than searching the space for the function. I have always thought of the FPGA as a player piano. The bit stream is the music roll and we make the music. Now that we have bounded the FPGA’s possibilities and we see they are finite, but huge, does anyone know the maximum possibilities for a CPU? It is not infinite, can’t be; assume a fixed clock speed. That question brings up two more thoughts: there is no such thing as random, and no such thing as infinite. Yes, in theory they exist, like helpful people at a help desk, but you cannot find them in practice. Roll some dice; they obey physics. Sum up all the matter in the universe, divide by the Planck mass, and that is the count of the smallest possible parts; not infinite. Mind boggling isn’t it? OK, go program an FPGA.
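Just to put a number like 2^293,892,224 in perspective, a couple of lines of Python (a sketch, using only the bit count quoted above) show how many decimal digits that design space has:

```python
# Rough sense of scale for the Virtex-7 bitstream design space.
# Uses the 293,892,224-bit figure quoted above; purely illustrative.
import math

bits = 293_892_224
decimal_digits = math.floor(bits * math.log10(2)) + 1
print(f"2^{bits:,} is a number with about {decimal_digits:,} decimal digits")
# For comparison, the estimated count of atoms in the observable
# universe (~10^80) is only an 81-digit number.
```

That comes out to roughly 88 million decimal digits, which is why nobody is going to stumble onto a working beamformer by enumerating bit streams.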



Mentor CEO Wally Rhines U2U Keynote

Mentor CEO Wally Rhines U2U Keynote
by Daniel Nenni on 04-26-2013 at 2:00 pm

You will never meet a more approachable CEO in the semiconductor ecosystem than Dr. Walden C. Rhines. The first time I met Wally was way back when I blogged for food and he invited me over for lunch. Even better, a year or two later I was having dinner with a friend at the DoubleTree in San Jose. Wally was waiting for his flight home so he joined us for a glass of wine and an impromptu industry discussion.

At the Mentor U2U conference today Wally did a replay of the presentation he did at the GlobalPress Electronics Summit which Paul McLellan blogged about HERE. Since I’m more of a foundry person let me comment on a different part of his presentation.

Wally pointed out that when I started in this business almost 30 years ago, semiconductor companies had their own fabs and could more accurately measure designs based on performance, power, area, AND manufacturability. With the emergence of independent semiconductor foundries this all changed. Manufacturing cost (yield) became a wedge between design and manufacturing. Fabless semiconductor companies emerged and pounded on the foundry doors begging for capacity for products that would have the best PPA (performance, power, area). The foundries wanted products that were manufacturable with high yield (low cost). It all came down to the choice of design rules: Should the design rule manuals (DRMs) be more accommodating to design with aggressive rules? Or should they be guard-banded to allow for manufacturing variability?

First the foundries offered manufacturing-centric DRMs with minimum design rules that had to be followed. As foundry competition emerged, fabless companies had more choices and demanded more design-oriented DRMs for better PPA. At 0.13µm (from what I remember) the foundries compromised and introduced the concept of recommended design rules. The minimum rules were more design-oriented (tight spacing) while the recommended (optional) rules were manufacturing-oriented (larger spacing). Naturally the fabless designers did NOT use the recommended rules since they were not PPA focused, especially the ones purchasing good die versus wafers.

Notice the resemblance to Wally! 😉

This arrangement broke at 40nm which resulted in an extended yield ramp and painful market delays. At 28nm recommended rules were done away with and more restrictive design rules were implemented. As a result, 28nm ramped in record time and will be the most successful process node we will ever see (my opinion). As the slides from my EDPS Keynote show, the more restrictive DRMs and DRC decks are, the larger and more complex they become.

This transition will continue to require better EDA tools for designers and fabs to manage this ongoing collaboration and the resulting complexity. One example Wally used was the Calibre PERC product which we recently blogged about HERE. This transition will also require closer collaboration between the fabless companies, EDA companies, IP companies, and the foundries. CEOs like Wally Rhines and conferences like U2U, DAC, ARM TechCon, and TSMC OIP are critical to our survival, so I ask all executives in the fabless semiconductor ecosystem to please allocate budget and send your best people out to make sure we all thrive in the coming process nodes.



When installing a sink, it’s a lot faster to buy a saw

When installing a sink, it’s a lot faster to buy a saw
by Don Dingee on 04-25-2013 at 8:10 pm

Mentor’s announcement from Design West this week pretty much signals the end of standalone ESL tools, in favor of more useful stuff. They have pulled the pieces of their Sourcery CodeBench environment along with their embedded Linux offering and their Vista virtual prototyping platform into a native embedded software development environment.



Morris Chang on Altera and Intel

Morris Chang on Altera and Intel
by Daniel Nenni on 04-25-2013 at 7:00 pm


If you want to know why I have written so much about TSMC in the past five years here it is: TSMC executives are approachable, personable, answer questions straight on, and have yet to lead me astray. If you want an example of this read the Chairman’s comments on the TSMC Q1 2013 earnings call transcript.

“On 16-nanometer FinFET, we have said several times that this is a change in cadence in our new technology introduction. It used to be 2 years per node and in the case of 16-nanometers FinFET, it follows just 1 year, by 1 year, the 20 SoC. So it is a quickening of cadence and that is because of market request, market requirements, customers’ requests.”

Call it Taiwan culture, or maybe it’s that TSMC executives are highly technical people (experts in their fields); either way, the flow of information is excellent for people who know what questions to ask. I’m not talking about press releases that professional PR people write for them in PR speak. I’m talking about unscripted Q&A sessions like the ones in the conference calls.

“The second point I want to make is that we have been collaborating with our customers and ecosystem partners for more than 15 years. Through the ecosystem OIP, TSMC’s technology has been collaboratively optimized for SoC development.”

My favorite Morris Chang story is from when I saw him at the Royal Hotel in Hsinchu last year. I came in at the same time he did and he beat me up the three flights of stairs to the lobby. Not kidding. This man has me beat by 30 years and 3 steps. I’m training on a Stairmaster now so I will be ready for him next time!

“CapEx will be between $9.5 billion and $10 billion this year. This is an increase from the last guidance we gave, which was about $9 billion. Basically, we have stepped up the preparation for the ramp-up of 20-nanometer and 16-nanometer. We have pulled some of the capital in because we want to be — to have as high yields as possible when we do start ramp-up, volume ramp-up. And of course, we are continuing to build up 28-nanometer capacity. Therefore, approximately 90% of the capital expenditures are for 28-nanometer, 20-nanometer, 16-nanometer, both building facility and equipment. Another 5% is for R&D and that’s mainly for 10-nanometer, 7-nanometer, et cetera.”

The best part of the call was in the Q&A with a question about Altera moving to Intel. Generally speaking the analyst questions are pretty dull but every once in a while they come up with a good one.

“I very much regret Altera’s decision to work on the 14-nanometer with Intel even though the financial impact is relatively small and Altera remains a major and valued partner of TSMC’s. We have gained many customers in the last few years but I really hate to lose even a part of an old one. We want them all, really. I regret it and because of this, we have thoroughly critiqued ourselves. If there was a thing like an investigative commission on what happened, we had it. And there were, in fact, many reasons why it happened and we have taken them to heart. And it’s a lesson to us and I don’t think that we — at least, we’ll try our very best not to let similar kinds of things happen again.”

In my opinion there was nothing TSMC could have done. Altera left TSMC because of Xilinx. Xilinx is a fierce competitor on all fronts: financial, marketing, sales, technology, ecosystem, etc… so there is no way Altera can outrun Xilinx on a level playing field. TSMC is open to all customers and does not do exclusive partnerships so Intel was a smart choice for Altera.

The question is: Can Intel be a good foundry partner for Altera? My guess is yes they can, as long as the new Intel CEO is on board with it and Altera does not need ARM (ARM and Intel do NOT mix). Not great news for Intel’s other FPGA partners though (Achronix and Tabula). They must really be steaming over the “exclusive” Altera deal!



Best Practice for RTL Power Design for Mobile

Best Practice for RTL Power Design for Mobile
by Paul McLellan on 04-25-2013 at 11:54 am

Mobile devices are taking over the world. If you want lots of graphs and data then look at Mary Meeker’s presentation that I blogged about earlier this week. The graph on the right is just one datapoint, showing that mobile access to the internet is probably up to about 15% now from a standing start 5 years ago.

Of course, one obvious thing about mobile devices is that they run on batteries. Although there is slow, steady improvement in battery technology, nobody is predicting any imminent breakthrough, so if we want our batteries to last then we have to do it by getting the power consumed in the SoCs in the phone/tablet down. Or at the very least, not letting it increase. For very high volume devices there are some big discontinuous changes on the horizon such as 20nm FinFET or FD-SOI, both of which have considerably lower power than the previous generation of 28nm planar (which doesn’t have great leakage characteristics).

So when do you decide to do power analysis? One answer is all the time. But realistically, above the RTL level the design is just a block diagram without any detail for blocks that don’t yet exist (IP blocks may be well-characterized). Below the RTL level, at gates or even lower, there is very accurate data available (especially post-layout) but it is too late to make anything other than minimal changes. So like Goldilocks’ porridge, RTL is not too hot and not too cold, it is just right.

Any power analysis, except the most coarse, requires vectors that stimulate the design in a “typical” way so that you can measure the power. Also, since you want to get a good power network designed early, you need to find corner-case vectors that have the big swings in current that might drain the decaps, cause noise spikes, or cause unacceptable voltage droop. So out of perhaps millions of available vectors, only a tiny percentage is needed to get good analysis done.

Design is not a static process. So once a strategy for keeping power under control has been agreed, regressions are necessary to make sure that, as the design progresses, no surprises occur that suddenly increase the power. It is always easier to fix a bad change to a design just after it has been made rather than when you are about to tape out.


So the basic flow is to start by making design tradeoffs. Next, power vectors need to be profiled in the various operating modes that the design might be in (playing mp3, transmitting, receiving, watching video…). The power can then be checked against the budgets and any hotspots identified. These can then be prioritized, deciding which anomalies are likely to have the biggest “bang for the buck” when fixed. Using automated tools, perhaps along with some manual and even embedded software work, the power can then be further optimized. And finally, regressions are created to make sure that the hard-won reductions don’t suddenly get lost.
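To make the regression step concrete, here is a minimal sketch of the kind of per-block check a nightly run could apply to RTL power estimates. This is not Apache's tooling or flow; the block names, budgets and numbers are invented for illustration:

```python
# Toy power-budget regression: flag blocks whose estimated power exceeds
# budget or has crept up since the last approved baseline.
# Block names, budgets, and numbers are illustrative only.

BUDGET_MW = {"cpu_cluster": 350.0, "video_decode": 120.0, "modem": 200.0}
BASELINE_MW = {"cpu_cluster": 330.0, "video_decode": 115.0, "modem": 190.0}
CREEP_TOLERANCE = 0.03  # allow 3% drift before flagging

def check_power(report_mw):
    """Return a list of human-readable violations for this run."""
    violations = []
    for block, power in report_mw.items():
        budget = BUDGET_MW.get(block)
        baseline = BASELINE_MW.get(block)
        if budget is not None and power > budget:
            violations.append(f"{block}: {power:.1f} mW exceeds budget {budget:.1f} mW")
        if baseline is not None and power > baseline * (1 + CREEP_TOLERANCE):
            violations.append(f"{block}: {power:.1f} mW is >3% above baseline {baseline:.1f} mW")
    return violations

# Example nightly run (numbers invented): video_decode has crept up.
tonight = {"cpu_cluster": 335.0, "video_decode": 131.0, "modem": 188.0}
for v in check_power(tonight):
    print("POWER REGRESSION:", v)
```

The point is simply that the check runs on every build, so a power-hungry change gets caught the night it lands rather than at tapeout.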

Apache (you know they are a subsidiary of ANSYS, don’t you!) has a webcast on best practice for RTL power. It is presented by Preeti Gupta, who is director of RTL product management. Here is the link for the webcast. It is 30 minutes long.


Bring high end camera image quality to smartphone

Bring high end camera image quality to smartphone
by Eric Esteve on 04-25-2013 at 9:13 am

We have to go back to 2008 to understand why Super Resolution is desperately needed by smartphone users, who expect to take high quality pictures with their smartphone, at least as good as with their camera. It was in 2008 that worldwide smartphone shipments surpassed standalone compact camera shipments… and we don’t expect this trend to reverse!

When you buy a smartphone, you probably don’t buy it first for the camera function, but you are happy to learn that you benefit from a 41 Mpixel sensor in your phone, and you think (like I do) that such a pixel count should provide top quality images. In fact, the CMOS sensor (the chip) in a smartphone is smaller than in a camera, so as the number of pixels grows, the pixel size becomes very small and more sensitive to noise and low light. Two other effects, based simply on geometry, will further degrade image quality in a smartphone compared with a camera, whatever the pixel count of your sensor, as we will see with these two pictures:

The first effect is due to the distance between the sensor and the lens: as we can see, there is at least one order of magnitude difference, if not more, when comparing the smartphone with the high end camera.

The second difference is simply linked to the lens size itself. It’s only geometry, but the laws of optics are totally based on geometry. Even if it looks cruel, it’s a matter of fact and, since Euclid issued the first rules and theorems, we can’t change these laws… But what we can do is digitize the signal, then process it using DSP algorithms, and that’s the solution proposed by CEVA, called “Super Resolution”.

The principle looks obvious, like any great idea: instead of trying to take one high quality image with your smartphone (which is almost impossible due to the geometry), you take four images with a lower resolution sensor (say 5 Mpixel) and you process them to finally generate a high resolution image. As you can enhance the resolution by 2X per axis, you can generate up to a 20 Mpixel quality image, starting with your 5 Mpixel sensor.
The process starts by enhancing image quality by:

  • Extracting image details
  • Reducing noise in the Luma & Chroma channels
  • Accurate sharpening
  • Ghost blur removal

Then you run the algorithm stages (a rough sketch of the fusion idea follows the list):

  • Coarse registration
  • Fine registration
  • Image fusion, including ghost removal
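
Real registration and ghost removal are far more involved than this, but the fusion idea itself can be shown in a few lines of NumPy. This is not CEVA's algorithm, just the textbook shift-and-add scheme under ideal assumptions (known half-pixel offsets, no noise):

```python
# Toy "shift-and-add" multi-frame super resolution in NumPy.
# Four low-res frames taken at half-pixel offsets interleave onto a grid
# with 2x resolution per axis. Illustrative only, not CEVA's implementation.
import numpy as np

def fuse(frames, offsets, scale=2):
    """frames: list of HxW arrays; offsets: per-frame (dy, dx) in high-res pixels."""
    h, w = frames[0].shape
    high = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(high)
    for frame, (dy, dx) in zip(frames, offsets):
        high[dy::scale, dx::scale] += frame
        weight[dy::scale, dx::scale] += 1.0
    return high / np.maximum(weight, 1.0)

# Simulate: a 2x "ground truth" scene, decimated four times at half-pixel shifts.
truth = np.random.rand(8, 8)
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]            # half-pixel steps
frames = [truth[dy::2, dx::2] for dy, dx in offsets]  # four low-res frames

fused = fuse(frames, offsets)
print("max reconstruction error:", np.abs(fused - truth).max())  # ~0 in this ideal case
```

In the real pipeline the offsets are unknown and fractional, which is exactly what the coarse and fine registration stages above have to estimate before fusion.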

Super Resolution has been described in technical papers for a while, but based on an iterative process requiring high bandwidth, up to 10,000 operations per pixel. CEVA has greatly reduced this complexity, down to fewer than 100 operations per pixel on a PC and finally, using the CEVA-MM3101 DSP core, down to 16 cycles/pixel. For example, in a 28nm process, the CEVA-MM3101 processor is able to take four 5 Mpixel images and fuse them into a single high-resolution 20 Mpixel image in a fraction of a second, while consuming less than 30mW.

Jeff Bier, founder of the Embedded Vision Alliance (www.Embedded-Vision.com), commented: “Smartphones are the most commonly used devices for capturing still images and video, but the slim form factor of these devices places severe limitations on the quality of captured images. CEVA’s Super Resolution algorithm, coupled with the CEVA-MM3101 imaging and vision processor, is an excellent example of how clever computer vision algorithms can be combined with optimized processor architectures to overcome physical limitations of imaging systems.”

Comparing image quality on a PC is not an easy task, but we can see, from the above picture, that CEVA SR looks better than a PC application, and far better than bi-cubic interpolation. And we agree with Eran Briman, vice president of marketing at CEVA, commenting: “Our new Super Resolution algorithm for the CEVA-MM3101 platform marks the first time that this technology is available in software for embedded applications. It is a testament to both the expertise of our highly skilled software engineers and to the low power capabilities of our CEVA-MM3101 platform, which comprises the hardware platform together with optimized algorithms, software components, kernel libraries, software multimedia framework and a complete development environment. We continue to lead the industry in the embedded imaging and vision domain and the addition of this latest high performance software component to our platform further illustrates the strength of our IP portfolio for advanced multimedia applications.”

The CEVA-MM3101 offers SoC designers an unrivalled IP platform for integrating advanced imaging and vision capabilities into any device. Coupled with CEVA’s internally developed computational photography and imaging expert algorithms such as dynamic range correction (DRC), color enhancement, digital image stabilizer and now super resolution, CEVA’s customers are equipped with a full development platform for image enhancement and computer vision applications for any end market, including mobile, home and automotive.

For more information, visit www.ceva-dsp.com/CEVA-MM3101.html.

Eric Esteve from IPNEST



10 to 100X faster HDL Simulation Speeds

10 to 100X faster HDL Simulation Speeds
by Daniel Payne on 04-24-2013 at 10:44 am

Speed, capacity, accuracy – these are the three major EDA tool metrics that we pay attention to and that enable us to design and verify an SoC. Talk to any design or verification engineer and ask if they are satisfied with the time that it takes to simulate their latest design, or to verify that it meets spec and is functionally correct. The answer that you hear is, “No, I’m not satisfied, simulation of my RTL takes way too long.”

The EDA industry has responded to this challenge with several verification approaches:

  • HDL simulators – powerful debugging capabilities, good signal visibility, moderate cost, too slow
  • Emulators – faster speeds than HDL simulation, pricey, lack of signal visibility
  • FPGA Prototyping – faster speeds than HDL simulation, moderate cost, unconnected to the HDL simulator

In 2011 the engineers at Aldec came up with an approach that combines an HDL simulator with an FPGA-based prototyping board, dubbed HES XCELL. A design or verification engineer can now use a familiar HDL simulator, with its debugging features, connected to an FPGA prototyping board to get a 10X to 100X speed-up over using an HDL simulator alone.

With this accelerated simulation approach the engineer continues to use a familiar HDL simulator to control the simulation and see results in the waveform viewer, while the actual design is simulating on the FPGA hardware to provide the speed-up. You still determine what simulates in the HDL simulator versus the prototype board, so a design engineer can place new blocks in the HDL simulator and reused blocks in hardware. As your design work is completed, you would place only your testbench in the HDL simulator, while the Design Under Test (DUT) is placed in the hardware:

8051 Example
The popular 8051 core and testbench were simulated in Aldec’s HDL simulator, called Riviera-PRO:

The 8051 core was using 97.08% of the CPU, while the testbench was using only 1.29% of the CPU. If we placed the 8051 core in hardware, instead of running it in the HDL simulator, then a significant speed-up could be had. Here’s a chart of the speed-up factor that you can expect with this accelerated simulation approach:
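
Independently of that chart, those CPU percentages put an Amdahl's-law ceiling on what any acceleration scheme can achieve for this particular split. A quick sketch (the formula and the numbers plugged in are my own illustration, not Aldec's measured data):

```python
# Amdahl's-law bound using the profiling numbers above: only the 8051 core
# (97.08% of simulator CPU time) moves to hardware; the testbench stays in software.
accelerated_fraction = 0.9708

def overall_speedup(hw_speedup):
    """Overall speedup when the accelerated part runs hw_speedup times faster."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / hw_speedup)

for hw in (10, 100, 1000):
    print(f"hardware {hw}x faster -> overall {overall_speedup(hw):.1f}x")

# Even infinitely fast hardware cannot beat 1 / (1 - 0.9708) ~= 34x here,
# because ~2.9% of the run never leaves the software simulator.
print(f"upper bound: {1.0 / (1.0 - accelerated_fraction):.1f}x")
```

The achievable gain for any given design depends on how much of the runtime stays on the software side, which is presumably why the quoted figure is a 10X to 100X range rather than a single number.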

Getting Your Design Into Hardware
There are five steps to get your RTL design into hardware for the accelerated simulation:

  1. Import your compiled HDL using the Design Verification Manager
  2. Configure your design for debugging
  3. Use logic synthesis on the blocks that will be accelerated into hardware
  4. Partition your design between HDL-based simulation and accelerated simulation
  5. Place and route your design for use in the FPGA

Aldec uses an FPGA prototyping board with Xilinx Virtex-5 parts; you can also use boards from the DINI Group or Synopsys HAPS.

Summary
The approach of accelerated simulation has been shown to speed up HDL simulation results by a factor of 10X to 100X, allowing you to complete your verification more quickly, while providing full debugging as in a traditional software-based HDL simulator.

Further Reading

White Paper: Simulation Acceleration with HES XCELL



ESD – Key issue for IC reliability, how to prevent?

ESD – Key issue for IC reliability, how to prevent?
by Pawan Fangaria on 04-23-2013 at 8:30 pm

It’s a common electrical rule that when a large amount of charge accumulates, it tries to break through its surrounding isolation. Although it may not have been prominent in the 1980s or 90s, protecting ICs from such damaging effects is a must, specifically in today’s large mixed-signal designs, working at different voltages and at lower process nodes where the gate oxide can become extremely thin and the breakdown voltage very low.

ESD (Electrostatic Discharge) failure in CMOS ICs can be caused by thermal breakdown under high transient current or by dielectric breakdown of the gate oxide under high voltage. If the IC does not fail immediately, it will gradually degrade in performance. In order to protect ICs from ESD, protection circuitry must be applied across IOs and power lines.

ESDA (ESD Association) puts the verification guidelines in three main steps: identifying the ESD-vulnerable devices, verifying the implementation of ESD protection, and checking the completeness of each such device’s protection. Si2 also recommends a standard ESD protection design flow methodology at –
http://www.si2.org/openeda.si2.org/project/showfiles.php?group_id=82&release_id=558

The burgeoning complexity and size of nanometer designs have made it critically important that a foolproof, automated system be in place for checking the ESD conditions created during manufacturing and for making sure that protection is built around the IO pads, such that any large voltage spikes are dissipated before they reach thin oxide devices or any internal circuitry of the IC. Mentor has developed a novel solution for this important problem. Its tool, Calibre PERC, checks the topology of the design for appropriate implementation of ESD structures and their placement with respect to the devices to be protected and the core of the IC. Calibre PERC implements all 39 ESD checks recommended by ESDA, such as layout checks, netlist checks and current density checks, to name a few important ones. The current density check ensures interconnect robustness.
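
As a very rough illustration of what a topology check means here, consider the sketch below. It is a toy in Python, not Calibre PERC or its rule syntax; the netlist representation, device names and pad names are all invented. The idea is simply to walk the netlist and confirm that every I/O pad has clamp devices to both supply rails:

```python
# Toy ESD topology check: every I/O pad should have a clamp device to VDD
# and to VSS. Netlist representation and device names are invented for
# illustration; real sign-off uses a tool like Calibre PERC on the actual design.

# Each device: (name, type, node_a, node_b)
devices = [
    ("D1", "esd_diode", "PAD_A", "VDD"),
    ("D2", "esd_diode", "PAD_A", "VSS"),
    ("D3", "esd_diode", "PAD_B", "VDD"),   # PAD_B is missing its clamp to VSS
    ("M1", "nmos",      "PAD_B", "core_in"),
]
io_pads = ["PAD_A", "PAD_B"]

def missing_clamps(devices, io_pads, rails=("VDD", "VSS")):
    """Return (pad, rail) pairs with no ESD clamp device between them."""
    problems = []
    for pad in io_pads:
        for rail in rails:
            ok = any(dtype == "esd_diode" and {pad, rail} <= {a, b}
                     for _, dtype, a, b in devices)
            if not ok:
                problems.append((pad, rail))
    return problems

for pad, rail in missing_clamps(devices, io_pads):
    print(f"ESD check failed: no clamp between {pad} and {rail}")
```

Production rule decks obviously go much further, checking device sizing, placement relative to the pad, current density on the discharge path, and so on.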

[ESD in metal interconnects]

The ESD verification is done at multiple levels, from cell to package, and across intra-power and inter-power domains. Calibre PERC identifies external device configurations as ESD protection structures.

[Some commonly used ESD protection configurations]

Calibre PERC can be programmed to perform other tasks such as parasitic extraction of metal interconnects, design rule checks, current density calculation, electrical compliance, etc. A very important aspect of Calibre PERC is that it performs AERC (Advanced Electrical Rule Checks), which can identify signal lines between power domains that work at different voltages in a mixed-signal design and call for additional ERC configurations between these domains.

[P2P extraction, current density analysis and design rule checks on identified topologies]

A nice description of ESD and its protection application in Calibre PERC is given in Mentor’s whitepaper at –
Solving Electrostatic Discharge Design Issues with Calibre® PERC™

Calibre PERC provides a comprehensive, integrated, complete, automated and error-free solution for checking and correcting ESD conditions, resulting in increased yield, performance and reliability of ICs.


Gigahertz FFT rates on a 500MHz budget

Gigahertz FFT rates on a 500MHz budget
by Don Dingee on 04-23-2013 at 8:30 pm

A basic building block of any communication system today is the fast Fourier transform, or FFT. A big advantage of FPGA implementations of FFTs is that they can be scaled and tuned for the task at hand, optimizing data flow, resource use, and power consumption. Scaled, that is, up to the clock speed of the FPGA – or so it would seem.

Today’s systems often present a massive amount of very fast data at the front end that needs to be sampled and decimated quickly, typical of a system with a lot of data channels in play like satellite radio or cable head-end systems. Sample rates run into the gigahertz range, putting them outside the range of FPGA clock speeds if the constraint is one sample per clock.
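
The arithmetic behind the article's title is simple enough to write down. In this sketch the sample rate and clock are assumptions chosen only to illustrate the point:

```python
# How many parallel samples per clock are needed to hit a given sample rate
# on a given FPGA clock? Numbers below are illustrative assumptions.
import math

sample_rate_hz = 4e9      # e.g. a 4 GS/s front end
fpga_clock_hz = 500e6     # the "500MHz budget" of the title

lanes = math.ceil(sample_rate_hz / fpga_clock_hz)
print(f"{lanes} samples must be consumed per FPGA clock cycle")  # -> 8
```

Whatever the exact numbers, once the front-end rate exceeds the achievable FPGA clock, some number of samples per cycle has to be processed in parallel.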

Parallelism comes to the rescue in a Synopsys FFT IP implementation. In state-of-the-art FPGAs, there is plenty of room to create parallel computational blocks, coordinating operations across multiple inputs. Instead of forcing a single block to run faster to keep up with data, a parallel approach allows data to be sampled and processed faster.

The key to this is the Radix-2 multipath delay commutator, a modular architecture which keeps the pipeline in sync between data elements. Flow control is implemented without a big timing penalty or reduction in throughput.
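
Underneath that architecture is the familiar radix-2 split, where one FFT is assembled from two half-length FFTs running on interleaved samples; that is the identity that lets parallel datapaths each take a fraction of the input stream. A small NumPy check of the textbook relation (this shows the math only, not the internals of the Synopsys IP):

```python
# Radix-2 decimation-in-time split: an N-point FFT built from two N/2-point
# FFTs running on the even and odd samples.
import numpy as np

N = 16
x = np.random.rand(N) + 1j * np.random.rand(N)

E = np.fft.fft(x[0::2])                 # FFT of even-indexed samples
O = np.fft.fft(x[1::2])                 # FFT of odd-indexed samples
W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors

X = np.concatenate([E + W * O, E - W * O])        # recombine into N bins

print(np.allclose(X, np.fft.fft(x)))    # True: matches the direct N-point FFT
```

Applying that split recursively across more lanes is, roughly speaking, how a parallel FFT consumes several samples per clock while each stage still runs at the FPGA clock rate.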

The following chart illustrates a simple case of what parallelism can achieve on a relatively small FFT. When I asked what the architecture is capable of, Chris Eddington mentioned this IP can do a 16k-point FFT operating on 32 parallel inputs, but keep in mind there is some reduction in system clock frequency as more parallel channels are stitched together with flow control.

Besides sampling faster, parallelism also decreases computational latency. The cost of this approach is of course area and multiplier utilization, but with 1120 to 3600 DSP slices in a Xilinx Virtex-7, this still fits very comfortably.

Of course, both Altera and Xilinx have capable FFT IP blocks, but there are a few key differences in the Synopsys implementation beyond the parallelism features. The Synphony MC tools are vendor independent and can target various FPGAs. The flow integrates with MATLAB Simulink for designers who prefer to work at the architecture level. Not every designer is an FFT expert, so being able to operate in high level tools is a plus.

The flow also generates RTL and SystemC, giving designers the flexibility and visibility needed to integrate the code into their system and tune things if necessary. Rather than being a “black box” implementation, this allows designers to simulate performance and power using other tools.

Speaking of tuning and power, one other way to use a parallel approach would be to consume reasonably fast data at lower power. Instead of going for the peak sample rates possible, parallel channels would allow a good sample rate at a lower FPGA clock speed.

You can read more insights on this FFT architecture from Chris and the Synopsys team in their article:
“Multi-Gigahertz FPGA Signal Processing”.



CDN Live 2013 in Munich: what’s the next acquisition?

CDN Live 2013 in Munich: what’s the next acquisition?
by Eric Esteve on 04-23-2013 at 8:10 pm

Going to Munich in May could be a very good idea, as it will give you the opportunity to listen to the keynote talk from Lip-Bu Tan. Who knows, you may learn live the name of Cadence’s next acquisition in 2013, after Tensilica and Cosmic Circuits? In fact, there may be no new acquisition announcement, in which case this keynote talk should help in understanding Cadence’s current strategy for developing its IP business. Looking at the other keynote talks gives some direction:

  • Industry keynote: Keith Klarke, VP Embedded Processor, ARM
  • Industry keynote: Rudi de Winter, CEO X-FAB

Extracted from their web site: “X-FAB manufactures wafers for automotive, industrial, consumer, medical, and other applications on modular CMOS and BiCMOS processes in geometries ranging from 1.0 to 0.18 µm, and special BCD, SOI and MEMS long-lifetime processes.” X-FAB’s market positioning is not to compete head-on with TSMC or GlobalFoundries on 28 nm technologies, but to serve growing market segments like automotive and industrial, which require mature technology nodes supporting high voltage and/or high current.

Keith Klarke, from ARM, will deliver the second keynote talk after Lip-Bu Tan. This is a good indication of the IP strategy that Cadence is building: Tensilica IP will not be used to compete with CPU or GPU IP cores from ARM, but rather could be integrated within ARM-based architectures. We can imagine, for example, an integrated AP/BB chip implementing the ARM Cortex big.LITTLE CPU architecture together with an LTE modem based on the Dataplane customizable processor from Cadence/Tensilica. As the Tensilica Dataplane also supports many other applications like audio, voice and speech or image/video processing, to name a very few, there will be room for cooperation… and also for competition. Having an ARM VP give an “Industry Keynote” talk right after the Cadence CEO is certainly a sign that Cadence prefers cooperation with ARM, which is by far the most realistic approach: recent MIPS history has shown that the market doesn’t really need “another RISC CPU core family”, but rather a complementary offer.

Like I wrote on SemiWiki a couple of years ago, ARM’s ubiquity has been built over the long term, and is now based on an ecosystem of 1,000 partners and an essential installed customer base. Will such a status change in the near, or even far, future? Competing head-on with ARM would require such an amount of energy, time and resources that I don’t see Cadence (or Synopsys) initiating such a battle. Staying partners is certainly the wiser solution!

CDN-Live is a three day event, with the equivalent of a two-day agenda, and covers everything from mixed-signal full custom, to digital, IP and verification-based design, up to PCB, signal integrity and power-aware design. On May 8, hardware/software co-design, as well as chip/package co-design and system-level design, will be covered. That day will also include presentations made by users from Freescale, Renesas, STMicroelectronics, ZMD, Infineon and Amkor. During the event there will be tutorials given by Cadence and academic presentations from various universities in Europe. This should be a dense event!

To register for CDN-Live, just go here

On my side I will focus on the Design and Verification IP tracks, with special attention to Interface IP: PCIe and M-PHY, USB 3.0 PHY IP, Memory Models for Verification, and DDR SDRAM Memory Controller and PHY IP. It will also be a good opportunity to learn about the Tensilica Dataplane CPU, and I am sure not to miss that track, as I will give the presentation just before it, named “Interface IP protocols: the winners, the losers in 2012”. It will be strongly updated from the presentation made during IP-SoC last December in Grenoble, as many changes have occurred during Q1 2013! Because we can now take into account the 2012 actual IP sales results for the various protocols (DDRn, USB, PCIe, SATA, MIPI, Ethernet, Thunderbolt, HDMI, DP), it will be fresh information, ahead of the launch of the “Interface IP Survey”…

Eric Esteve from IPNEST
