
TSMC vs Intel vs Samsung FinFETs

by Daniel Nenni on 06-08-2014 at 10:50 am

By definition the pure-play foundry business model separates the design and manufacturing of a semiconductor device. TSMC was the first dedicated (pure-play) foundry which enabled the incredible fabless semiconductor ecosystem we have today. If not for the fabless business model we would not have the supercomputer class mobile devices in our pockets. We, as integral parts of the fabless semiconductor ecosystem, have changed the world, absolutely.

So the question is: As an executive for a fabless semiconductor company, why would you even consider turning back the clock and renting fab space from an IDM (a company that both designs and manufactures semiconductor devices)?

As a student of history I have always felt that it is critical to understand how you got to where you are today in order to predict where you will be tomorrow. As a 30-year veteran of Silicon Valley I have seen many companies succeed, but I have seen many more fail due to one fundamental truth: they failed at the future. In writing “Fabless: The Transformation of the Semiconductor Industry,” Paul McLellan and I agreed that the key to the fabless business model is competition. Unfortunately, at 28nm there was no real foundry competition, and that has opened the foundry business doors to IDMs once again.

You can’t fault TSMC. They executed at 28nm and were rewarded with a dominant market position. Had GlobalFoundries been able to provide a competitive 28nm offering I would not be writing this blog. Trust me on this: fabless semiconductor companies would NOT even consider doing business with an IDM foundry if they had two or more leading edge pure-play foundry options.

Intel thinks they are clever by getting into the foundry business. Unfortunately Intel is being used as a pawn in a very high stakes game of foundry chess. I also believe this to be true in the SoC business but that is another blog. Samsung on the other hand is a chess grandmaster which puts Intel between a serious rock (Samsung) and a very hard place (TSMC). Take a close look at Samsung’s latest announcements:

Samsung ♥ GLOBALFOUNDRIES

Samsung Endorses FD-SOI!

Both are excellent moves toward becoming an integral part of the fabless semiconductor ecosystem. Samsung is also the only foundry to have shown working 14nm silicon at the Design Automation Conference last week. My sources tell me that Samsung 14nm is 3-6 months ahead of TSMC’s 16FF+. My sources also tell me that TSMC 16nm FF+ is today the most competitive FinFET offering, meaning power, performance, area, AND cost. This is based on information from the associated PDKs, not from PowerPoint slides or press releases.

Competition is what drives the fabless semiconductor ecosystem, and I thank Intel and Samsung for the investments they have made. If not for that competitive pressure we would not have the ultra-aggressive FinFET process development schedules, nor would we have the competitive wafer pricing I have seen of late. Unfortunately, all FinFET processes are not created equal, so it will be difficult for fabless companies to design to multiple foundries, which means there will be clear winners and losers in this game. If this were a horse race and I had to bet today, it would be TSMC to win, Samsung to place, and Intel not even to show. Just my opinion of course.



Hogan’s Internet of Things Panel

by Paul McLellan on 06-06-2014 at 10:50 am

Jim Hogan organized a panel on the Internet of Things (IoT) on Wednesday afternoon. The panellists were Randy Smith of Sonics, Bernard Murphy of Atrenta, Gary Smith (himself) and Frank Schirrmeister of Cadence.

Gary reckons that IoT is a Wall Street buzzword being used to pump up stock prices. If you go to a series of presentations on IoT you will realize that they are all talking about something different. So IoT is not a market: medical is a market, mil-aero is a market, automotive is a market, and so on.

Frank is skeptical that it will fill fabs since typically these are not large expensive chips. He sees it as hierarchical, with the sensor devices with some compute power linked to a hub (cellphone, computer or living room) and then uplink from there to the cloud. Cloud services are an important part of IoT.

Bernard beat the drum for security. IoT represents a new challenge for security since, if you believe the numbers, there will be something like a trillion edge nodes. So it looks more like a biological system, and perhaps we need to start thinking of security the way a body does, with localized defenses and signalling mechanisms, not the brittle way that anti-virus is done today with simple signatures. Security is also better with diversity, so that there is no single means of attack.

Randy sees changes in the market with system companies building chip teams (e.g. Apple, Google, Amazon). It is system companies that know their markets and a lot will be high-volume consumer with very short time to market. IoT will need the equivalent of agile software development but for hardware.

Everyone agreed that verticalization has been a trend for years, especially in the area of design above RTL, which can be very market specific. Designs will clearly need cosimulation with sensors, analog and so on to be able to do the architectural design.

Power is clearly going to be a big issue. Gary pointed out that servers are not a major driver of the ITRS roadmap. Many IoT devices will be heterogeneous, designed for specific functions.

The design cycles will be short so this is a real opportunity for FPGAs (although, of course, power is the weakness of FPGAs which inevitably are somewhat wasteful compared to custom gates).

Gary pointed out we are starting to have cross industry competition. The car guys want you to buy a car that has all the electronics and you pay a subscription (think onStar). The phone people want all the smarts in apps on your phones and a receptacle in the car to plug it in.

Software is going to be a big part of the market. It is not clear what operating systems will be used or how money will be made. Is Mentor making real money in embedded? The embedded market put all the value into the RTOS and gave away tools, so when open-source OSes like Linux came along that business fell apart. The real business is the tools. Although architects write code in C++, they are actually doing hardware design.

The challenge for the tools is to bring software and analog/sensor design closer together. These are probably teams that are not going to buy an emulator.

The economics of designing “smart dust” may require one company rather than layers of integration: build the sensor, processor, radio, and OS all in one place.




Seen at DAC! Self-Driving Cars – Victory Lap or Pile-Up?
by Holly Stump on 06-05-2014 at 6:00 pm

It is axiomatic that the DAC vendor community would love to serve the exciting and expanding automotive market; and the auto community would love to continue to increase their value through innovative software/hardware solutions, which will one day lead to the self-driving car. But how do we team to lap the track?

Jim Hogan set the stage for the Heroes DAC panel with the scope of the automotive challenge, both software and hardware… stressing that software defines user experiences.

Alain Labat, Managing Director at Harvest Management Partners, LLC, spoke to some current statistics on the growth of software complexity in automotive, with actual, hot-off-the-presses data.

All leading to the much-bemoaned software productivity gap…



Virtual prototyping was identified and discussed as one key technology solution to address this software productivity gap. Its ability to accelerate embedded software development, and unify the development chain across multiple groups and companies, will be increasingly necessary.

Dr. Sridhar Jagannathan, Chief Innovation Officer and EVP at Persistent Systems, then asked “How Intelligent is Your Car? And Is Your Car More Intelligent than You?”

He introduced a new concept, VIQ® – Vehicle Intelligence Quotient® (Persistent). VIQ is a new four-axis scale for measuring and comparing the intelligence of cars, as shown below. And you may even be able to compare your own IQ with your car’s VIQ!



And Martin Baker, Senior Manager of Ecosystem and Business Management at Renesas, probed three critical societal and technology questions…

Why do we need self-driving cars?

  • Safety, with 1.2M deaths and 20-50M injuries per year
  • Time
  • Congestion
  • Emissions/fuel economy

How might self-driving cars evolve? We have many of the pieces today – it is an evolution, to integrate and grant greater control authority to the vehicle.

Phase 1: Driver Awareness -> Car assumes control in specific situations

  • “Cocooning” – Driver keeps control in normal case, car intervenes in emergency
  • Adaptive cruise, lane keeping, blind spot, navigation -> supercruise
  • Emergency brake assist, lane keeping -> traffic jam assist
  • Auto park
  • Meanwhile, Google car is an interesting approach to specific situations

Phase 2: Autonomous driving car

Phase 3: Adding V2x communication

  • Initially to enhance driver’s/car’s situational awareness
  • Then, co-ordinated behavior between cars – platooning, route co-ordination, junction co-ordination etc.

Hardware enablers for self-driving cars include:

  • Computing power – High end SoCs with dedicated image processing, sensor fusion capabilities, high efficiency (R-Car Gen 2)
  • Bandwidth – Ethernet AVB, largely in silicon
  • Off-board communications – V2V, V2x
  • Safety – to ASIL D (RH850 P1x), mixed criticality
  • Security – in silicon (RH850)

Software enablers include:

  • Safety: ASIL D, fail operational
  • Model-based development
  • Building blocks already in place
  • Over the air updates, with security
  • Ability to detect and respond to new security threats

Martin also discussed EDA, and how the industry must address development challenges, especially dramatically more software; requirements for more integration across multiple systems and partners; and ISO 26262, which will evolve.

It was acknowledged that automotive requires a step change in speed and efficiency. This will entail a new development ecosystem, and partnerships for innovative tools and processes, addressing automation, virtualization, integration, affordability, and accessibility.

A general call to action was proposed, with collaboration between EDA solutions companies and the automotive players.



Panelists

  • Martin Baker, Senior Manager of Ecosystem and Business Management at Renesas, a leader in automotive semiconductor technology. Martin spent 25 years at Ford Motor Company where he was global head of embedded software, process and tools, and electrical architecture, and European head of electrical system integration. Martin was CEO at Invirtech and led the Automotive Business Unit at Pacific Insight Electronics.
  • Alain Labat, Managing Director at Harvest Management Partners, LLC. Previously, Alain was President and CEO of VaST Systems (now Synopsys), providing embedded software and virtualization technology to automotive and other applications. Alain was CEO and Co-founder of Sequence Design and was SVP at Synopsys and VP at Valid Logic Systems (now Cadence).
  • Dr. Sridhar Jagannathan, Chief Innovation Officer and EVP at Persistent Systems. Sridhar is responsible for new models of innovation and growth at Persistent, which has a strong consulting practice area in automotive. Previously: Vice President, CTO Office at Intuit, Inc.; Managing Director for Symantec’s India Development Center for consumer products; Vice President of Technology for Softbank Emerging Markets Fund; and Technical Director for Internet & eCommerce at Oracle.


Also read:
Google Robot Cars are Coming!




The Best and Worst of #51DAC!

by Daniel Nenni on 06-05-2014 at 4:00 pm

When people ask which DAC is the most memorable I used to say my first because I was a new college grad and it really was exciting. The next DAC was memorable since it was in Las Vegas and my beautiful wife joined me. This year was DAC number 30 for me and of course it will be the most memorable since I signed hundreds of copies of “Fabless: The Transformation of the Semiconductor Industry” during the book signing Tuesday night and signed more just walking around the exhibit hall, and for that I thank everyone.

When Paul McLellan, Daniel Payne and I started blogging on SemiWiki in 2011 we wanted to bring social media to the semiconductor industry. It was our feeling that if you wanted to get the younger generation involved you had to speak their language. Fortunately I have four children who taught me how to speak it and now SemiWiki has surpassed the one million user milestone. What an amazing experience this has been, absolutely.

When Paul McLellan, Beth Martin, and I decided to write a book the motivation really was to bring the history of the fabless semiconductor industry to an even wider audience. We made good progress this week by giving away 1,500 books with the help of eSilicon, Atrenta, Tanner EDA, Solido, and EDA Direct. Online sales of the eBook had another spike this week as well and for that I again thank you all.

DAC seemed to be somewhat “mature” this year (I’m surprised AARP did not have a booth) but the technical content was very good based on the presentations I attended. The foundries were the stars of the show in my opinion. TSMC, Samsung, and SMIC all had theaters with non-stop ecosystem presentations. I give the highest honors to Samsung since they showed working 14nm silicon in their booth. I also give best presentation to Philippe Magarshack for his talk on 28nm FD-SOI and very candid answers to my questions.

Where was Intel Foundry? They were doing what they do best, putting out EDA press releases that meant absolutely nothing.

My wife’s best DAC experience was handing out books and watching me sign them. Shushana was intimately involved in the publishing of the book so she knows what an effort it was. She also appreciated that the women’s bathrooms were never crowded. Her worst DAC experience was seeing John Cooley in cargo shorts; seriously, that has to stop. The best booth design/theme, according to my wife (and I agree), was Ansys: great design, very eye catching, very artistic.

The EDAC kick-off party was very good. My wife and I enjoyed listening to Sonia Harrison sing. We also spent quality time with some of the Heroes of EDA. The best party we attended was by ClioSoft at the Press Club. They gave away a pair of Google Glass! We also got a very nice bottle of La Follette Pinot Noir as a parting gift. This was a very classy affair by a very classy company. My wife and I skipped the mosh pit DAC parties and I skipped the dinners that did not also invite my wife. All-in-all a very entertaining week!

There will be many more #51DAC blogs to come so stay tuned. I just wanted to share a first glance and thank everyone involved. DAC is an institution that we should all support, absolutely.

Also read Impressions of #51DAC




Embedded Vision Summit

by Paul McLellan on 06-05-2014 at 2:32 pm

I was at the Embedded Vision Summit last week. Jeff Bier, the founder of the Embedded Vision Alliance, gave an introduction to the field. The conference was much bigger than in previous years, and almost everyone is designing some sort of vision product. Half of your brain is used for vision, so it goes without saying that vision requires a lot of computation. It is the highest bandwidth input channel for devices that need to interact with the physical world.

Vision has been an active research field for a decade, but processor performance has now become high enough that it is going mainstream. Processors are up at well over 10 GMAC/second (TI’s are at 25 GMAC/second), but that threshold was only passed relatively recently. In the real world, vision is hard, with varying lighting, glare, fog and other challenges.

Yann LeCun of Facebook showed some of his research on recognition. There has been a revolution in the algorithms in the last couple of years. Yann had a demo with a camera attached to a laptop with a big nVidia graphics engine being used to run the algorithms. He could point the camera at things and it would tell you what they were (keyboard, space-bar, pastry, shoe and so on). In real time.

Chris Rowen of Cadence/Tensilica presented on “Taming the Beast: Performance and Energy Optimization Across Embedded Feature Detection and Tracking.” This was largely about how to use the Tensilica Image/Video Processor (IVP) for things like recognition and gesture tracking. The big challenge is to do this with really low power.


The three most important things are:

  • Exploit data locality (otherwise the energy spent fetching data will swamp the power budget and computation rates will slow)
  • Leverage libraries
  • Use tools in tuning
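The data-locality point can be made concrete with a small sketch. This is purely illustrative (the tile size and the f/g pixel operations are invented, not Tensilica's): the idea is to run the whole processing chain on one tile while it sits in fast local memory, instead of making full-frame passes that write out and re-fetch every intermediate pixel.

```python
# Two ways to apply a two-stage pixel pipeline (f then g) to an image.
# Both produce identical results; the tiled version applies both stages
# while each tile is "local", so on real hardware the intermediate data
# never has to leave fast local memory.

f = lambda p: p * 2      # hypothetical first stage (e.g. scaling)
g = lambda p: p + 1      # hypothetical second stage (e.g. offset)

def naive(image):
    # Full-frame passes: the intermediate frame is written out, then re-fetched.
    tmp = [[f(p) for p in row] for row in image]
    return [[g(p) for p in row] for row in tmp]

def tiled(image, tile=4):
    # Tile-by-tile: both stages run while the tile is in local memory.
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    out[y][x] = g(f(image[y][x]))
    return out

image = [[(x + y) % 7 for x in range(10)] for y in range(6)]
assert tiled(image) == naive(image)  # same result, far fewer frame-sized fetches
```

On a DSP-class device the same idea typically shows up as DMA-ing a tile into local RAM, running the whole kernel chain on it, and DMA-ing the result back out.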

I talked to Chris afterwards and he told me a bit more about the IVP. It has about 400 imaging instructions on top of the basic processor, enabling new applications like that recognition demo. But power is the big challenge, since we can’t put a full-blown nVidia graphics chip in our phones.

We are going to end up with a hierarchy of power levels: micropower for the always-on portion that knows when the rest of the system should wake up (listening or looking, for example); then the system itself, think of your phone; and finally uploading data to the cloud. But some processing has to be done locally: it is too expensive in both power and delay to transfer a whole video (or speech waveform) to the cloud uncompressed. So even with the cloud the power problems do not go away.




SystemC: User Group update from DAC

by John Swan on 06-05-2014 at 12:00 pm

I always enjoy attending the SystemC user group meeting to see what is being done by users of SystemC. This time was no exception. Not only is it FREE, but the professional networking around the meeting, presentations, and breaks is terrific.

There were 5 paper presentations at the North American SystemC User Group (NASCUG) meeting on Monday, including an Accellera standards update from Shishpal Rawat, Chair of Accellera. NASCUG is co-located with DAC. I am on the committee that reviews the presentations. Thanks go to David Black of Doulos for chairing this event.

The full agenda is posted at www.nascug.org and the presentations will be made available within a week.

A highlight for me was the keynote paper on UVM in SystemC. This new work by Fraunhofer, which is gaining wider support within Europe, was done primarily to allow UVM to be extended to the system level; UVM in SystemVerilog is viewed as a block-level verification solution. The work has been contributed to Accellera, which plans a vote this fall on establishing it as the UVM standard for SystemC.

UVM in SystemC has not yet been benchmarked against UVM in SystemVerilog; up to now the focus has been on making the implementation fully compatible with the existing UVM standard. Would you like to give it a try?

Additionally, Intel is working on out-of-order (OoO) parallel simulation in SystemC to take better advantage of today’s multicore processors while keeping sc_thread and sc_method semantics unchanged. As you can imagine, this takes a more intelligent compiler that can look for data dependencies at a higher level of granularity. When unknowns such as pointers are used, however, it reverts to in-order execution for that code. Standard parallel execution can give a speedup of 14x running on 2 Intel Xeon X5650 CPUs with 6×2 cores each; using Intel’s OoO execution approach this increases to about an 80x speedup, and other tests show even greater improvement. Through my activities I am aware that Intel has been at the forefront of advanced design methodologies. Intel is partnered with the Center for Embedded Computer Systems at the University of California, Irvine on this effort.
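The dependency test at the heart of this kind of OoO scheduling can be illustrated conceptually. The sketch below is not Intel's implementation; the segment/read-set/write-set model is invented for illustration. Two thread segments may run out of simulation order only if neither writes anything the other reads or writes, and statically unknown accesses (pointers) force a conservative fallback:

```python
# Conceptual conflict check for out-of-order scheduling of thread segments.
# Each segment is modeled by the sets of variables it reads and writes;
# pointer accesses (statically unknown) force a conservative conflict.

def conflicts(a, b):
    """True if segments a and b must stay in simulation-time order."""
    if a.get("unknown") or b.get("unknown"):
        return True  # e.g. pointer access: revert to in-order execution
    return bool(a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & (a["reads"] | a["writes"]))

seg1 = {"reads": {"x"}, "writes": {"y"}}
seg2 = {"reads": {"z"}, "writes": {"w"}}
seg3 = {"reads": {"y"}, "writes": {"x"}}

assert not conflicts(seg1, seg2)  # independent: may run out of order
assert conflicts(seg1, seg3)      # RAW on y (and WAR on x): keep in order
```

The interesting engineering, of course, is doing this analysis at compile time over real SystemC processes rather than over toy read/write sets.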


Finally, and important to me, there were two presentations on High Level Synthesis (HLS) at this NASCUG. I was glad to see more work being done on HLS-friendly IP. In this case CircuitSutra has developed an AMBA AXI4 bus model for HLS. This provides more flexibility and hides the protocol details from the user. Similarly, NEC’s CyberWorkBench HLS tool suite provides a bus generator, the output of which feeds into their HLS tool: a user simply does a read(x,y) or write(x,y) to the bus without concern for the protocol details. Additionally, AdaptIP is focused on developing IP using an HLS flow. I plan to visit them at the IP community at DAC today. See my blog on the HLS for IP Panel in the DAC Pavilion here.

Please review the NASCUG presentations when you see they are posted (I hope soon) and let me know here what you think. I will post a comment when the presentation slides become available.



Synopsys Galaxy Platform & Lynx Design System supports FD-SOI

by Eric Esteve on 06-05-2014 at 11:36 am

This is a new brick that Synopsys brings to build FD-SOI credibility. We have talked on SemiWiki about FD-SOI technology, developed by Leti and ST and recently endorsed by Samsung Foundry, which offers a more than credible second source to ST. And we have said that the FD-SOI introduction will need to be supported by EDA and IP vendors to be successful. Synopsys has now announced that its design flow support for 28-nm FD-SOI technology has been extended to Samsung. This simply means that SoC designers will have all the design tools they need along the flow to generate a production-ready IC design in GDSII format!

Let’s first take a look at the Galaxy™ Design Platform, a comprehensive solution for cell-based and custom IC implementation. Galaxy RTL and physical implementation products concurrently balance design constraints by performing intelligent tradeoffs between speed, area, power, test and yield, while Galaxy signoff engines accurately model complex physical interactions to ensure signal and power integrity. Thus, SoC designers targeting 28nm FD-SOI can use the same design flow they would use when targeting bulk or FinFET technologies.

The Lynx Design System is built to accept various technology plug-ins (left side of the picture). A technology plug-in using ST’s 28-nm FD-SOI Process Design Kit (PDK), standard cells and memories adapts the production-proven Galaxy Design Platform-based RTL-to-GDSII flow for 28-nm FD-SOI SoC designs, accelerating project setup and execution. Lynx automation simplifies and accelerates many critical implementation and validation tasks.

“The close collaboration between ST design teams and Synopsys led to advanced silicon-proven design enablement solutions that fully leverage the performance and power promise of FD-SOI technology and provide the foundation needed to meet tight time to market windows,” said Philippe Magarshack, executive vice president, Design Enablement and Services, STMicroelectronics. “Our close collaboration with Synopsys has already enabled many successful tapeouts with mutual customers using Synopsys’ Galaxy Design Platform and Lynx Design System.”

The important word here is “foundation.” Before starting to integrate CPU or DSP cores, the design team needs to benefit from a complete and automated design environment, which is now the case. It will be interesting to monitor the progress made by Synopsys in delivering FD-SOI-proven DesignWare IP (interface PHYs for USB, PCIe, SATA, HDMI, the MIPI D-PHY and M-PHY, the DDR4 and LPDDR4 PHYs, as well as ADCs and DACs). We have highlighted in a previous post that STMicroelectronics tends to develop this type of IP, but the license agreement with Samsung does NOT include these IP. Thus, these FD-SOI-related IP represent a new segment for IP vendors like Synopsys, a TAM extension if you prefer. The right question, as of today, is whether Synopsys will port the existing mixed-signal IP mentioned above from bulk to FD-SOI straight away, or decide on a case-by-case basis, driven by customer demand…

The comment from Dr. Shawn Han, vice president of foundry marketing, Samsung Electronics, is interesting as it perfectly summarizes what was written on SemiWiki about the FD-SOI option: “28-nm FD-SOI is an ideal solution for customers looking for extra performance and power efficiency at the 28-nm node without having to migrate to 20-nm. Our close collaboration with Synopsys and ST will enable designers to reduce risk, accelerate time-to-market, minimize power and maximize performance to expand 28-nm FD-SOI adoption.”

Availability
The Synopsys Galaxy Design Platform and Lynx Design System with support for ST and Samsung 28-nm FD-SOI process technology are available now from Synopsys. The 28-nm FD-SOI-enabled PDK, standard cells and memories for early design are available now from Samsung.

From Eric Esteve, IPNEST




Impressions of #51DAC

by Paul McLellan on 06-05-2014 at 9:56 am

So what was the overall theme of DAC this year? Usually there seems to be some trend that is hot. A few years ago it was power, then more recently all the stuff associated with 20nm and 16nm such as FinFETs and double patterning. Those things are still around, of course, and there are new generations of tools.

One theme is that more design is being done at the system level than before. The enabling technology for much of it is emulation, and all three major EDA vendors have a good emulation solution. Prior to emulation, the modeling problem was too big a barrier and it was too challenging to keep the system models synchronized with the RTL. Emulation allows the RTL to be used as the model, but with enough performance from the emulator to enable system-level work, especially work involving software development or running software workloads.

One thing that is shaking things up a bit is Samsung’s foundry strategy. They have a clear commitment to create a competitive foundry ecosystem, and they made two big announcements in the last few weeks. First was the transfer of their 14nm process to GlobalFoundries, so that the identical process will be available from Samsung’s fabs in Korea and Austin and from GlobalFoundries’ Fab 8 in Malta, New York. GlobalFoundries was completely uncompetitive at 28nm, which left the field open to TSMC and gave it pretty much a monopoly for a couple of years.

Then there was the announcement that Samsung is licensing FD-SOI from STMicroelectronics. I attended a presentation by Philippe Magarshack of ST on the Samsung booth. I already knew a fair bit about FD-SOI since I had to present on it at EDPS this year. It is available for volume production out of ST’s Crolles fab (just south of Grenoble) and will be available from Samsung early in 2015. One thing I hadn’t realized is that since FD-SOI is actually a simpler process than bulk, its cycle time is about 15% less. Another thing I learned was that back-biasing can be used not just for power/frequency management but also to recenter the process and remove some of the variability.

TSMC were talking about their 16nm FinFET process, where the design enablement is largely complete. They are starting to talk about 10nm and are doing the preliminary work. But they also have a new process and library at 28nm. I guess one mini-theme of DAC is that 28nm is a process node that is going to stick around for a long time, since it is cheap, avoids all the double patterning and variability challenges of 20nm, and has large enough capacity for many purposes. Samsung licensing FD-SOI is another way of extending 28nm while getting a better process at the same time.

Another piece of news that broke during the show was that Broadcom are getting out of mobile and selling that part of their business to Huawei. The rise of Chinese companies in mobile is somewhat underappreciated here since they mostly don’t sell in the US. But China is such a huge market on its own.

So no big overall theme for DAC this year. The march of process nodes carries on, although it seems that it will be muted, with a lot of design sticking at 28nm and only the most advanced designs going to 14/16nm.

Also read: The Best and Worst of #51DAC!


Xilinx’s 16nm vs. Altera 14nm

by Luke Miller on 06-04-2014 at 8:00 pm

You will not believe this, but the family was picking me up Friday evening from the airport and on the way home… Get this, for real, the wife asks me to cut her hair tomorrow. Now the three of you that read my stuff know what happened before. I resisted, and firmly said ‘No’… The wife, seeing my macho stance, began appealing to my engineer’s mind as she started talking economics. My dear manly readers, I stayed strong and I am out $50. Best $50 Mr. Miller spent last week…

On my travels I learned some valuable information: Xilinx continues to dominate in execution, performance and design wins, and, by the way, Xilinx is REALLY shipping 20nm. I must say the Xilinx competition has revealed how desperate they are, after hearing the claptrap they are pumping into the field. All I can say is “Non dolet, Paete!”

So let’s talk execution (no, not the Texas kind). Did you know that within hours of receiving wafers from TSMC, the Xilinx 20nm UltraScale gigabit transceivers were up and running? That is not an accident, but careful design, planning and a relationship with TSMC. The most critical choice for any FPGA company is deciding who is going to build your chips. You can have great tools, architecture and IP, but that is nothing without a yielding wafer. Process, process, process. I had the opportunity to learn ASIC design early in life, and you know what, it’s really, really hard. I would sit in meetings listening to guys with pocket protectors and crooked glasses talk about electrons and poly as if they could see them with the naked eye. Maybe they could, and that’s why the glasses were thick.

So I guess I will go here, but these are valid questions and concerns. Is Intel able to perform as a foundry? What is wrong with asking? Looking at Altera’s earnings reports, the cash flowing out does not appear to be great; one could guess $50 million to Intel, or even $100 million. That is not a huge amount of dough in the world of Intel, which did around $52.7 billion in sales, and let’s face it (I do not like it, and you may not either), money is power and money gets things done. So, if the 4M monolithic logic cell FPGAs start to have poor yields, who pays for that? Not just in FPGA cost and screening, but how about your design schedule? How about the other Intel production runs that drive the business? Can all play nice? Can you say errata?

Do I think Intel can pull off 14nm? No doubt; time and money fix everything but congress. Intel’s 14nm foundry experiment needs a guinea pig, or someone to clean the foundry pipe, and frankly I would rather be the 3rd customer on the Intel 14nm foundry process than the first. There is just too much that must be perfect for success to occur. While all this is going on, Xilinx is executing: Xilinx 28nm FPGAs own more than 70% of the FPGA market, 20nm is cranking away and picking up steam, and 16nm is on deck. Do you expect there to be a hiccup in the Xilinx-TSMC well-oiled machine? Not likely, and that is why Xilinx is not only the better performing FPGA but clearly the safe choice when planning your next design, giving you confidence the FPGA will be ready for action when your design needs it, and errata free.



Non-separation of power and performance

by Don Dingee on 06-04-2014 at 2:00 pm

How much power does a system consume? The simplistic path to power estimation for a system used to be tossing a few metrics – standby, typical, worst case, with figures pulled from a datasheet, simulation, or physical measurement – into a spreadsheet. After filling the remaining holes with SWAG (scientific wild-ass guesses) and summing things up, there was a bottom line.
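That spreadsheet approach can be sketched in a few lines. The component names and milliwatt figures below are entirely made up for illustration; the point is only the shape of the calculation, one column per scenario, summed to a bottom line:

```python
# A "spreadsheet" of per-component power figures (mW), one column per
# scenario, with the holes filled in by datasheet numbers and SWAG.
parts = {
    "SoC":    {"standby": 1.5, "typical": 450.0, "worst": 900.0},
    "DRAM":   {"standby": 4.0, "typical": 180.0, "worst": 310.0},
    "radio":  {"standby": 0.1, "typical": 120.0, "worst": 280.0},
    "sensor": {"standby": 0.2, "typical":  15.0, "worst":  40.0},
}

def bottom_line(scenario):
    """Sum one column of the spreadsheet."""
    return sum(p[scenario] for p in parts.values())

print(bottom_line("worst"))  # 1530.0 mW: the figure that sized the supply
```

Each cell hides real dependencies (workload, temperature, voltage), which is exactly why the bottom line this produces is imprecise at best.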

But that bottom line on power was imprecise at best. Designers were usually after worst case, because it determined system cost.