Why Did Intel Pay $15B For Altera?

by Paul McLellan on 06-30-2015 at 12:00 pm

While I was at the imec Technology Forum someone asked me “Why did Intel pay $15B for Altera?” (the actual reported number is $16.7B).

The received wisdom is that Intel decided it needs FPGA technology to remain competitive in the datacenter. Some people believe that without FPGA acceleration available for vision processing, search and other algorithms that map better onto a hardware fabric than onto a processor, Intel will gradually face more and more competitors in the datacenter. Even if you only put that possibility at 50-50 (say), the "only the paranoid survive" attitude is to get an FPGA acceleration solution anyway. Of course they don't need to buy Altera to do that. I'm sure Altera (or even Xilinx) would be happy to sell them all the chips they need. But at some point that technology may need to be embedded, in which case having it on the same process already counts for something.

The next question was "Couldn't they just build an FPGA solution themselves? It wouldn't cost $15B." At the technical level, I am sure the answer is that they could. Intel has great engineers, and if they put their minds to it I'm sure they could produce something.

But I see 3 problems with doing it in-house.


  1. Time. Intel might be able to design a suitable fabric, but how many years would it take to get it up to a competitive standard? Altera and Xilinx have spent decades on it; Intel would be trying to catch them from a standing start.
  2. Patents. It is basically impossible to design an FPGA without violating Altera's and Xilinx's patents. Those two companies maintain a cold war of mutually assured destruction. But anyone else would run into problems if and when they got commercial traction, and Intel would probably run into them even earlier. If (say) Xilinx felt Intel was blatantly violating its patents, it might launch. Against an FPGA startup, the most it could win would be the startup's entire cash balance, which probably wouldn't cover the legal fees.
  3. Software. FPGA is as much about software as hardware. I once did due diligence for a VC on a hardware fabric (arrays of tiny CPUs) and told the VC to run away fast, because the company didn't even realize it was basically in a software business, where it had no expertise. They, and Intel, could probably build the hardware fabric. But could they build and mature a software tool chain that takes C and other software languages and maps them onto the fabric seamlessly? That takes years too.

    Besides, Intel has already tried to grow their own FPGAs from seed with Tabula and Achronix, in both of which they were major investors and to both of which they provided foundry services. Tabula closed its doors. Achronix's doors are still open, but the rumors are not enthusiastic.

    So if Intel wants a mature FPGA fabric with a working tool chain that allows compilation of offload software into hardware, they pretty much have to buy Altera or Xilinx. I don't think Lattice have powerful enough software or large enough arrays; it's not what they do. Xilinx are deep partners with TSMC, with 10nm just announced. Altera are partners with…Intel (and TSMC too, to be fair). So it's an easy decision which girl to chase at the dance.

    The next question. “So why would Intel want to run a merchant FPGA business?” I have to say that I agree with the question. If I put myself in Intel’s shoes I wouldn’t want to. Mostly they are shipping TSMC silicon and have no opportunity to move it into an Intel fab. The Intel/Altera 14nm arrays are not even sampling (or even taped out, I hear). For anti-trust reasons they may have had to promise to keep the business going as a condition of the deal closing, but otherwise the first thing I would do is shut it down, or at least not invest in it for the future. It doesn’t need enough wafers to “fill the fab”. And it doesn’t move the needle in revenue either (Altera is a little less than $2B, all TSMC silicon, and Intel is $60B or so). So Altera’s merchant business is a pure distraction from Intel’s business in the datacenter and notebooks.

    Who benefits? Everyone else. The Altera 14nm FPGAs have ARM processors on them. Who in their right mind is going to kick off an ARM-based project on Altera FPGAs now? Xilinx would seem a much safer choice. They are not about to exit the merchant FPGA business, nor switch ARM out for Atom, nor fail to get timely access to ARM’s latest and greatest next-generation cores, or whatever your nightmare of choice is.

    With regard to the datacenter acceleration question, there are two outcomes. One, it turns out to be really important, which bodes well not only for Intel/Altera but also for the ARM/Xilinx ecosystem, which will be basically everyone other than Intel, including some powerful players such as Qualcomm. Or, two, it isn't a major factor. ARM's partners can still compete on the basis of power, price and physical size and may get some traction. And Intel wasted $15B.

    Also Read: Xilinx in an ARM-fueled post-Altera world



    Analog/Mixed-Signal Data Management with Custom Designer
    by Majeed Ahmad on 06-30-2015 at 7:00 am

    In recent years, a number of technologies, along with the constant desire for faster and more pervasive mobile communications, have set in motion a sustained growth trend in "next big thing" areas such as the Internet of Things (IoT), wearables, automotive electronics, and advanced medical devices. In all these growth areas, analog and mixed-signal (AMS) designs play a crucial role.

    With the increased use of analog content in today's systems-on-chip (SoCs), analog designers have become a much-sought-after commodity. As a result, owing in part to design complexity as well as to the scarcity of analog designers in any one location, design teams are increasingly distributed across several sites.

    Growing trends such as IoT, connected cars, medical wearables, etc. bring another critical analog design issue to the fore: synergy with digital subsystems. In front-end digital design, where most of the design data is Verilog or VHDL code, engineers usually rely on tools like Subversion or Git borrowed from software development.

    On the other hand, AMS design mainly involves schematics and layouts, which are stored as binary data. So the design data management systems (DDMS) commonly used in software development and digital design are becoming increasingly error-prone and inefficient for AMS environments, where complexity is rising while the time-to-market window is shrinking.


    Elmos uses ClioSoft SOS data management for analog/mixed-signal design in Custom Designer tool flow

    According to Thilo Schmidt, a design engineer at Elmos Semiconductor AG, a design data management system for an AMS environment should first and foremost be aware of the data structure. Moreover, it should be tightly integrated with the design tool flow. He presented a paper on this subject at the Synopsys Users Group (SNUG) Germany held in Munich on June 25, 2015.

    In the paper, titled "Analog/Mixed-Signal Data Management with Custom Designer and ClioSoft," Schmidt presents ClioSoft's SOS data management tool in combination with the Synopsys Custom Designer platform as a case study of how a data management tool can be tightly integrated into an AMS design flow.

    Data Management for Analog/Mixed-Signal Design

    Schmidt opens the paper with an overview of the traditional way of carrying out data management for AMS designs. It’s usually based on a project directory on a common file server that hosts a set of shared project libraries, a PDK library, libraries with IP modules and individual work libraries for each design engineer. Next, he shows why the traditional methods don’t work anymore for large design teams amid issues such as maintenance and syncing of project libraries and accidental overwriting of changes.

    Schmidt also outlines four basic operations in a data management workflow: Check-out, check-in, tag and update. A user who wants to change a design object would start with an update operation on his workarea. Next, he would check-out the design object for editing. After he safely edits and verifies the design object in his workarea, he can check-in the design object, which will automatically create a new revision of the file in the repository. Finally, the design object is tagged as verified.
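
The four operations map naturally onto a repository/workarea model. Here is a minimal Python sketch of the update → check-out → check-in → tag cycle; all names are illustrative only (the real ClioSoft SOS commands and APIs differ):

```python
# Toy model of the four data-management operations Schmidt describes:
# update, check-out, check-in, tag. Illustrative only; not ClioSoft SOS's API.

class Repository:
    def __init__(self):
        self.revisions = {}   # object name -> list of revision contents
        self.tags = {}        # (object name, revision index) -> tag label
        self.locked = set()   # objects currently checked out

    def head(self, name):
        # Index of the latest revision of an object.
        return len(self.revisions[name]) - 1

class Workarea:
    def __init__(self, repo):
        self.repo = repo
        self.files = {}       # local view: object name -> contents

    def update(self):
        # Pull the latest revision of every object into the workarea.
        for name, revs in self.repo.revisions.items():
            self.files[name] = revs[-1]

    def checkout(self, name):
        # Lock the object for editing so others cannot overwrite it.
        if name in self.repo.locked:
            raise RuntimeError(f"{name} is already checked out")
        self.repo.locked.add(name)

    def checkin(self, name, contents):
        # Create a new revision in the repository and release the lock.
        self.files[name] = contents
        self.repo.revisions.setdefault(name, []).append(contents)
        self.repo.locked.discard(name)
        return self.repo.head(name)

    def tag(self, name, revision, label):
        # Mark a specific revision, e.g. as "verified".
        self.repo.tags[(name, revision)] = label
```

A designer's session then reads update, checkout, edit, checkin, tag: exactly the sequence described in the paper.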


    Synopsys’ Custom Designer Library Manager with SOS integration

    Then Schmidt goes into the specifics of data management required for AMS design and shows why popular software solutions like Git aren't suitable for the analog environment. For instance, a data management system for AMS design has to differentiate between libraries, cells and cellviews. Moreover, it should know the dependencies between individual cellviews. However, the data management system can only be aware of these dependencies if it is tightly integrated into the design flow and its data structures.

    Here, Schmidt turns to the Synopsys Custom Designer use case, which utilizes an application programming interface (API) to integrate a third-party data management system, ClioSoft SOS. SOS has a client-server architecture in which a server can host several projects; each project is completely independent and has its own repository.

    One of the prominent features that distinguishes SOS from other data management systems is its use of local cache servers. A user has the option to set up one or more cache servers in addition to accessing the main repository server. A cache server holds all the current revisions of design objects, which saves disk space; it also prevents users from circumventing access control by merely changing Unix file permissions.

    ClioSoft SOS also offers a more elegant way to handle external data like IP modules: SOS allows data to be referenced directly from other projects, so a special IP project can be set up instead of managing the IP modules inside the design project.

    Design Flow Integration

    Finally, Schmidt comes to the crux of the subject matter: how to integrate design data management into an AMS design flow. He opens this section by showing how the project workarea is organized around the Analog directory and the Digital directory, which is home to the RTL flow. The SOS data management tool allows the RTL data to be kept in a separate project, but on the same server, so the user can treat digital data differently from analog data.

    The files in the Analog directory are generated as symbolic links to a cache server while the files in the Digital directory are local copies. Having analog and digital development work in the same project hierarchy is highly beneficial because tagging and verification results include both analog and digital domains.


    Thilo Schmidt: Data management should be tightly integrated with the design tool flow

    Schmidt also mentions the tool-configuration feature, which records the releases of all tools in the design flow along with their configuration options. That ensures all users have access to the same tool releases and configurations when manipulating and verifying the design data.

    Schmidt closes the paper with the tapeout use case, where last-minute changes to the layout after the final verification runs are a common source of errors. He outlines two key objectives that have to be reached during tapeout: a consistent and verified project state, and securing that state in a reproducible way.

    In this regard, he explains two important SOS features: Snapshot, a permanent tag that cannot be moved between revisions, and a versatile query engine that can be used to analyze the current status of all design objects. Schmidt also provides details of the tapeout flow and how it can be implemented in six concise steps.

    With factors such as increased design complexity, shrinking time-to-market and high NRE costs, it is becoming difficult to manage and deliver SoC devices successfully the very first time. The traditional approach to handling design data gets more and more inefficient and error-prone as design complexity and team sizes grow. One way to cope with these challenges is to deploy a design data management system, as is already common in software development and digital design.

    However, due to the complex nature of analog designs, adopting a software-oriented data management system is tedious and cumbersome, so it becomes important to use a data management system that is aware of the design data structure and is tightly integrated with the EDA tools. A tool like ClioSoft's SOS platform, tightly integrated with Synopsys Custom Designer, thus becomes an important way to improve designer productivity and efficiency.

    Also read:

    Managing Design Flows in RF Modules

    Data Management: Bridging Digital and Analog Domains in RF Designs

    The Secret Sauce of Successful Mixed-Signal SoC Tapeouts


    Xilinx in an ARM-fueled post-Altera world

    by Don Dingee on 06-29-2015 at 5:30 pm

    When the news broke about the on, off, and on-again Intel-Altera merger a few weeks ago, I checked off another box on my Six Degrees of Kevin Bacon scorecard. That plus a $5 bill gets me a Happy Meal at McDonalds, but in a post-Altera world, it might be worth more.

    On January 16, 2008, I'm sitting in a meeting with some Intel strategic marketing types discussing the embedded market. It's a brain-picking session with Intel asking open-ended questions about trends and the competitive landscape – no NDA, because Intel isn't sharing their information. I casually mention the concept of "SoC reconfigurability", the idea of an FPGA sitting next to a processor core…


    More about “MIPI beyond Mobile” Paper at DAC

    by Eric Esteve on 06-29-2015 at 12:00 pm

    The "MIPI Beyond Mobile" paper was presented at the 52nd DAC in San Francisco, and I can share the key findings with SemiWiki readers. The paper was written to synthesize certain results of the "MIPI Ecosystem Survey-2015" and evaluate their impact on future MIPI IP sales. First of all, the MIPI ecosystem has really changed between 2012 and 2015: during 2013 and 2014, as many companies joined the MIPI Alliance in two years as during the six years before (2007-2012). MIPI technology is becoming very attractive!

    Which companies are joining the Alliance? Mostly young companies (start-ups), most of them targeting non-mobile applications like IoT and wearables. But we also see new members coming from Asia (China and Taiwan) developing chips for the mobile phone (smartphone) ecosystem. In summary, the MIPI Alliance is being renewed on two fronts: emerging chip makers targeting a mature application (assuming we may call the smartphone a "mature" product!), and both start-ups and well-established companies targeting emerging applications.

    IPnest was created in 2008 to analyze the IP market, so it's no surprise that this paper was proposed for the DAC IP track. The initial assumption was that the MIPI IP market should benefit from these emerging chip makers and applications. What metrics could validate this assertion? The first one is the MIPI IP growth rate in comparison with other, similar IP segments, like USB, PCIe, HDMI, etc. Let's have a look at the ranking by CAGR for these different segments:

    The MIPI IP segment clearly exhibits the highest CAGR (47%)… it's also the youngest IP segment in the list, so we have to dig deeper: is this growth rate due to the normal penetration of MIPI technology, or is it the consequence of a real pervasion of MIPI beyond mobile? In other words, is the growth driven by MIPI adoption in emerging applications or by emerging (Asian) chip makers?
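
As a reminder, CAGR is the compound annual growth rate, (end/start)^(1/years) − 1. A quick sketch (the revenue figures below are made up for illustration; they are not IPnest data):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly growth rate
    that takes `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Illustrative only: a segment growing from $10M to $21.6M
# over two years compounds at roughly 47% per year.
growth = cagr(10.0, 21.6, 2)
```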

    Better to look at the MIPI IP market in detail, with this ranking by IP vendors in 2014:

    For those familiar with the interface IP market, there is no surprise: Synopsys is also dominant in the MIPI segment (as in PCIe, USB, DDRn, HDMI, etc.). If we look at Synopsys's customers and analyze their nature (emerging chip makers?) and targeted applications (smartphone or emerging?), we should get a good indication of where the MIPI IP market is heading…

    From the company website, we have found these recent success stories:

    MIPI IP for Application Processor

    MIPI DSI & D-PHY IP for Application Processor

    MIPI D-PHY IP for Vision Processor Unit (VPU)

    MIPI DigRFv4 & M-PHY for 4G chips

    The first two companies are emerging Asian application processor chip makers targeting smartphones. Why source MIPI IP externally from a vendor? Time-to-market (TTM) is certainly a good incentive. As newcomers (at least compared with Qualcomm and the like), buying MIPI DSI and PHY is not only a good way to accelerate TTM, it's also a way to avoid a re-spin by using a production-proven function.

    Movidius is also an emerging chip maker, developing a Vision Processor Unit, but for emerging markets like IoT and wearables. Movidius' core competencies are centered on vision processing, and selecting MIPI technology is certainly a strategic choice, but developing a MIPI D-PHY would not bring any differentiation; outsourcing the PHY and concentrating on how to attack emerging markets is the best option. Even Fujitsu, obviously not a newcomer and serving an ASIC market, could validate the initial assumption:

    The MIPI IP segment is growing fast and is expected to keep growing, both because MIPI technology adoption is going beyond pure mobile (phone) to serve emerging applications like IoT and wearables, and because emerging chip makers (serving the mobile market) tend to source MIPI IP externally for faster TTM and safer development using silicon-proven IP. Just for your information, the latest forecast from IPnest suggests that the MIPI IP segment should weigh in at $90 million by 2020, plus or minus 10%, i.e. in the $80 to $100 million range.

    From Eric Esteve from IPNEST


    Synopsys Eats Their Own Dogfood

    by Paul McLellan on 06-29-2015 at 7:00 am

    One of the most interesting presentations that I went to was the last presentation at the Synopsys Custom Lunch (no, the lunch wasn't custom, we all got the same food, but the presentations were about custom design). Since the last presentation was by Synopsys themselves and not by a customer, it didn't seem promising that it would be that interesting. But, as the saying goes, Synopsys "eats their own dogfood." This is a phrase used (mostly) in the software world meaning that a software company uses its own software.

    But in a sense the presentation was by a customer. It was by Anwar Awad who heads up the IP design group at Synopsys. They use exclusively Synopsys tools for all their IP design. Since they are #2 in IP overall (perhaps depending on how you count Rambus) and #1 in interface IP, they are not someone easily ignored. Although Anwar talked a lot about the difficulty of designing IP for FinFET processes, I want to focus on the bigger picture.

    Bringing up a modern process node such as 14/16nm to volume manufacturing depends on several things. First, obviously, the process needs to be ready and the fab needs to have been built (well, duh). But that is table stakes. Without other requirements being satisfied, the fab will sit empty losing $50/second. To run wafers, the designs need to have been done. And to do the designs, the EDA flows need to be in place.

    But also the IP. Some advanced groups such as Qualcomm develop almost all their own IP and use very little third-party IP. But most groups are not in a position to do that; they get IP from the IP companies, often ARM for microprocessors and Synopsys for interface IP. Design groups cannot tape out their designs until they have access to the IP. So Synopsys is on the critical path to volume ramp in leading-edge fabs.

    This has broken the logjam that used to exist with regards to PDK availability. If foundries want timely IP from Synopsys then they need to provide the IP group with timely PDKs. Like those old Visa ads where stores “don’t take American Express” the Synopsys IP group “doesn’t take Virtuoso PDKs” because they only use their own tools.

    One thing that Anwar emphasized was that a timely PDK doesn't mean waiting until every last number is available in its final form. That is much too late to start IP development if the IP is to be available in time. Synopsys starts IP development with version 0.1 of the PDK, knowing that it will go through many revisions before the final numbers arrive. Of course this results in a lot of redesign, but the alternative is to be late.

    Synopsys have over 400 designers using their own tools for IP design, which in turn drives tool development to deliver the capabilities necessary for success. IP design in FinFET processes is different from planar processes: so much of the silicon performance is dominated by layout parasitics that schematic-level simulation is largely irrelevant, and extracted layout is the only way to go. Of course this adds another wrinkle, since they now have to design with early DRC decks and adjust layout as the rules change.


    Anwar showed the details of the IP development tool flow (above, click to get a version large enough to read) and you can see just how many Synopsys tools are involved.

    Here are his other golden rules for success dealing with moving decks, models, flavors, and project starts:

    • Early LVS clean for extraction
    • Use larger-than-min design rules where possible to minimize changes
    • Leverage common blocks across IPs as much as possible – Single PLL team for example
    • Run density checking after device placement
    • Automation of running DRC with fill in place
    • Single design/layout environment independent of the foundry
    • Layout layers are common across all foundries to improve layout efficiency when moving from process to process


    The diagram above shows the process for 16nm, showing when they started, when new projects were started and so on. This resulted in a lot of testchips for both the various flavors of TSMC and Samsung (which is also GF). The portfolio was USB2, USB3, DDR4, PCIe2, PCIe3, HDMI, DPHY, 10G KR, 16G.


    Although a lot can be done in simulation, ultimately there is no substitute for looking at the "eye diagrams" from measurements of real silicon running at speed (see above). This is especially so for a new process with a new feature such as FinFETs, and especially early in the design cycle, when everyone knows the PDKs are not final and the design rules will change.

    In total they ran over 30 test chips. Presumably most were on MPW shuttles but even so that is a major investment (I don’t know what the financial arrangements were but I doubt Synopsys paid retail).

    Synopsys Custom Design page is here

    Wikipedia page on eating your own dogfood (really) is here


    The First Book on Smartphones!

    by Daniel Nenni on 06-28-2015 at 8:00 pm

    Now that we all have smartphones you may be interested in how this all came about. There are quite a few books about the smartphone technology and business but one of the first books that emerged after the iPhone era is by SemiWiki blogger and former EETimes Editor Majeed Ahmad:

    Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics.

    In retrospect, the notion of a smartphone emerged in 1998, when Nokia, Motorola and Ericsson joined hands to turn the British computing platform Psion into a smartphone juggernaut called Symbian. However, the smartphone industry remained in the doldrums until 2007, when Apple rewrote the mobile playbook with its iconic iPhone.

    What's so special about this book? For a start, it has it all: smartphone episodes from Nokia to BlackBerry to Apple and Google. Next, it offers rich information on the history, evolution, technology and business development cycles of smartphones. It has also drawn some very nice reviews from industry experts around the world:

    Henning Wriedt, a veteran technology journalist, likes the book because it covers the era of the smartphone from A to Z. “I have something in my archive, which gives me a complete and detailed overview of an important part of this industry, spanned across nearly 20 years.”


    A smartphone business archive for technology buffs

    Then there is Lyle Appleyard, a computer programmer from Manitoba, Canada, who accidentally discovered this book on Goodreads. “I am not sure that I would have picked this book up if I had not won it on Goodreads. As a history buff and a bit of a geek, this book turned out to be right up my alley.”

    When Appleyard began reading the book, he wondered if there was enough material about smartphone technology and business to justify a book with over 400 pages. He wrote after reading the book: “The author did a good job of gathering a ton of information for this book. It was intriguing to read about all the different companies that contributed to the development of the smartphone. Some I knew, some I had never heard of. It was interesting to read about the problems they had, the problems they caused and the possible future of the industry.”


    The book provides a detailed treatment of Apple’s rivalry with Google

    Sometimes technology books are a dry read, which is not the case with this one. What is unique about this book is that it turns a highly technical subject into an interesting read. "The author did a good job of explaining things," Appleyard wrote. "It was very educational and shed some light on the smartphone and its history."

    The smartphone has been the key driver of semiconductor devices for nearly a decade. The book delves into both the hardware and software sides of the smartphone business. It narrates, for instance, how Steve Jobs gave the go-ahead for the iPhone project only when Apple engineers assured him that ARM-powered chips could handle the convergence of voice, data, music, and video.


    The moment when mobile industry changed forever

    The book also features a detailed treatment of ARM-based chips and Intel's Atom chips. Moreover, it provides an insider's view of key players—such as Apple, Google, Nokia and Microsoft—and charts their respective journeys to smartphone riches. A sense of what worked and what didn't could be highly valuable for managers at companies that aim to explore opportunities in the smartphone realm.

    Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics is available in both paperback and e-book formats.


    Is Interconnect Ready for the Post-mobile SoCs?

    by Majeed Ahmad on 06-28-2015 at 2:00 pm

    Interconnect technology is one of the unsung heroes of the system-on-chip (SoC) revolution. It's the on-chip networking fabric that links the various IP cores on an SoC floorplan, facilitating connections between multiple processors, on-chip memories, hardware accelerators and more. In other words, the interconnect is the skeleton and nervous system of an SoC device.

    As chips get bigger and integrate more functions, they require more IP blocks, which in turn increases the significance of a robust interconnect. Chipmakers have traditionally built the interconnect part of the SoC through internal bus groups, and some still do that job in-house. However, the increasing complexity of SoC devices has led to the emergence of specialized players like Arteris, the Campbell, California-based IP supplier that labels its SoC interconnect technology network-on-chip (NoC).


    Arteris calls interconnect the skeleton and nervous system of SoCs

    The Arteris NoC interconnect technology applies packet-transport networking techniques to moving information inside an SoC device. Arteris appeared on the chip scene in the mid-2000s, when IC vendors began to put the functionality of two to three chips onto a single large chip. Arteris got its first breakthrough when Texas Instruments designed its interconnect IP into the OMAP4 application processor.
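
Packet transport here means that each on-chip transaction is serialized into a header plus payload "flits" (flow-control units) that traverse the network, much like a miniature computer network. A generic sketch of the idea (this is not Arteris' actual packet format, which is proprietary; field widths are arbitrary):

```python
# Generic illustration of NoC-style packetization: a write transaction is
# serialized into flits with a header carrying routing information.
# NOT Arteris' actual format; field widths chosen arbitrarily.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Flit:
    kind: str      # "head", "body", or "tail"
    data: int

def packetize(src_id: int, dest_id: int, address: int,
              payload: List[int]) -> List[Flit]:
    """Serialize a write transaction (non-empty payload) into a flit stream.
    Assumes address < 2**40 and IDs < 2**8."""
    header = (src_id << 48) | (dest_id << 40) | address
    flits = [Flit("head", header)]
    flits += [Flit("body", word) for word in payload[:-1]]
    flits.append(Flit("tail", payload[-1]))
    return flits

def depacketize(flits: List[Flit]) -> Tuple[int, int, int, List[int]]:
    """Recover (src, dest, address, payload) at the target interface."""
    header = flits[0].data
    src = header >> 48
    dest = (header >> 40) & 0xFF
    address = header & ((1 << 40) - 1)
    payload = [f.data for f in flits[1:]]
    return src, dest, address, payload
```

Serializing wide transactions into narrow flit streams like this is what lets a NoC save wires and die area relative to a traditional wide bus.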

    In 2006, Arteris shipped its first interconnect IP product, NoCSolution, which TI licensed for its OMAP SoCs in 2007. Both the OMAP4 and OMAP5 application processors have employed Arteris' interconnect technology. TI's OMAP4 chipset powered Motorola's Droid smartphone, while OMAP5 won an SoC socket in the Amazon Kindle tablet.

    In 2009, Arteris launched its second-generation NoC interconnect IP product, FlexNoC, which featured improved latency and made it easier for chipmakers to use the interconnect technology. The next year, in 2010, SoC powerhouses Qualcomm and Samsung licensed the FlexNoC interconnect IP for their mobile chips.

    Another high point for Arteris came in 2013, when Samsung used its FlexNoC technology in the Exynos 5 Octa chipset shipped in Galaxy S4 smartphones. Most of the world's smartphones now use Arteris FlexNoC interconnect IP.

    Fighting SoC Bottlenecks

    The SoC coverage in the trade media is mostly centered on CPUs and GPUs because that's the cool stuff. But it's crucial to have a sophisticated interconnect design that can intelligently address quality-of-service (QoS) requirements when linking the different IP building blocks on an SoC. For instance, cameras, CPUs and displays are sensitive to latency, while video codecs are bandwidth-hungry.

    Interconnect bottlenecks can result in problems such as routing congestion and difficult timing closure. The repercussions of poor interconnect design also include increased die size and delayed time-to-market. Probably the most important factor, though, is the rising cost of SoC designs.


    Janac: It’s becoming hard for internal SoC teams to keep up with interconnect challenges

    Arteris’ President and CEO Charles Janac points to the fact that the cost of building an interconnect was around $5 million back in the mid-2000s when the SoC movement took off at a larger scale. Now an in-house interconnect job requires an investment of $15 million to $20 million.

    Janac adds that interconnect IP allows SoC designers to optimize latencies according to the requirements of the chip, and that can save chipmakers a lot of money. He claims that Arteris’ on-chip networking technology, which uses packetization and serialization techniques, can save SoC makers a couple of square millimeters in die size, 6 to 7 milliwatts of power, and nearly three months in time-to-market.

    The interconnect technology is going to face a new set of challenges as SoCs get bigger and more powerful to claim a stake in new market segments. For a start, the aggregate width and length of interconnect links inside an SoC will increase, and that can lead to routing congestion and timing closure déjà vu all over again.

    SoC: The Next Frontiers

    The specialized interconnect technology had its first major break in consumer-centric devices like mobile phones, which began a relentless push toward integrating more features at lower cost during the 2000s. On-chip networking technologies like Arteris FlexNoC also helped mobile chipmakers address constraints related to die area, power and time-to-market.

    Fast forward to 2015 and new challenges are ready for the SoC interconnect fabric. First and foremost, there is a rapidly growing infrastructure for datacenters that will inevitably require more powerful SoC designs. Here, chipmakers are going to add more processor cores to boost throughput per watt and thus reduce the overall power consumption of datacenters.


    Will interconnect evolve with larger chips and smaller nodes?

    A new class of SoC designs will lead to a change in traffic patterns, and that can result in interconnect bottlenecks. Next up, there is the connected car juggernaut, where brand new audio, video and security applications will require a lot more processing horsepower to run intensive software algorithms.

    The recent wave of mergers and acquisitions in the semiconductor industry is partly about the rising cost of SoC designs for smaller nodes like 14nm and 10nm. So far, popular SoC designs have ventured into high-volume markets like mobile phones to justify higher costs of complex SoC projects.

    Now powerful SoC designs are opening up new avenues in markets such as connected wearables, Internet of Cars and datacenters that demand innovation before high volumes. Here, at this crossroads, interconnect technology, a crucial part of the SoC design, can play a vital role in steering SoCs clear of bottlenecks.

    Also read:

    Automate Timing Closure Using Interconnect IP, Physical Information

    Arteris Flexes Networking Muscle in TI’s Multi-standard IoT Chip

    Arteris Sees Computational Consolidation Amid ADAS Gold Rush


    What’s New in Functional Verification Debug
    by Daniel Payne on 06-28-2015 at 7:00 am

    We often think of EDA vendors competing with each other and using proprietary data formats to make it difficult for users to mix and match tools, or even to create efficient tool flows. At the recent DAC event in San Francisco I was pleasantly surprised to hear that two EDA vendors decided to cooperate instead of creating incompatible formats in the area of functional verification debug.

    Related – Are There Trojans in Your Silicon? You Don’t Know

    VCD
    The Value Change Dump (VCD) file format has been around as a standard since 1995, as part of IEEE Std 1364-1995. It works well enough for smaller designs, but as design size grows a VCD file can become multiple gigabytes in size, which really starts to slow down EDA tools during loading, parsing and operation. EDA vendors came up with proprietary extensions to VCD and with other binary formats, but nothing universal has been widely accepted.
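VCD's size problem is easy to see from its structure: it is plain text, and every value change of every signal costs a timestamp line plus a value line. A minimal writer for a single 1-bit signal (my own sketch of the standard format) shows the pattern:

```python
def write_vcd(changes):
    """Emit a minimal IEEE 1364-style VCD dump for one 1-bit signal 'clk'.
    `changes` is a list of (time, value) pairs. Every change costs a
    timestamp line plus a value line, which is why dumps of large,
    active designs balloon to multiple gigabytes."""
    lines = [
        "$timescale 1ns $end",
        "$scope module top $end",
        "$var wire 1 ! clk $end",  # '!' is the signal's short identifier code
        "$upscope $end",
        "$enddefinitions $end",
    ]
    for t, v in changes:
        lines.append(f"#{t}")   # timestamp line
        lines.append(f"{v}!")   # new value immediately followed by the code
    return "\n".join(lines)

dump = write_vcd([(0, 0), (5, 1), (10, 0)])
```

Multiply those two lines per change by millions of signals toggling over billions of cycles and the format's scaling problem is obvious.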

    Cooperation
    So the technologists at Cadence and Mentor Graphics decided to cooperate and create a successor to VCD, so that modern SoCs with billions of transistors and massive waveforms can be functionally verified in the most efficient manner, saving users time. Ellie Burns from Mentor and Adam Sherer from Cadence presented at the Verification Academy booth at DAC. I first met Ellie at Viewlogic back in the 1990s and have kept in touch over the years; she also lives nearby in beautiful Oregon.

    What these companies are proposing is a Debug Data API (DDA) to allow any EDA tool to create or view debug waveform data. Dennis Brophy of Mentor Graphics also wrote an informative blog about DDA earlier this month, describing how the DDA works.

    Cadence has validated this new DDA with their SST2 waveform format, and Mentor with their Visualizer. Some of the benefits of the DDA are:

    • VCD interoperability
    • Data portability
    • Openness

    Adam Sherer blogged about how the DDA uses an open, Apache-licensed source code base so that each EDA vendor can optimize the interface implementation for their own tools.
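I haven't seen the DDA source itself, but the general shape of such a vendor-neutral adapter layer is easy to imagine. The sketch below is entirely hypothetical (the class and method names are invented for illustration and are not the actual DDA specification): each vendor would implement a common reader interface over its own native waveform store.

```python
# Hypothetical illustration of a vendor-neutral debug-data adapter.
# All names here are invented for this sketch; the real DDA defines
# its own interface in its Apache-licensed source base.
class WaveformReader:
    """Interface a vendor tool would implement over its native format."""

    def __init__(self, store):
        # `store` stands in for a native database,
        # e.g. {"top.clk": [(time, value), ...]} sorted by time.
        self._store = store

    def signals(self):
        """List the hierarchical signal names available."""
        return sorted(self._store)

    def value_at(self, signal, time):
        """Return the most recent value of `signal` at or before `time`."""
        last = None
        for t, v in self._store[signal]:
            if t > time:
                break
            last = v
        return last

r = WaveformReader({"top.clk": [(0, 0), (5, 1), (10, 0)]})
```

The point of such an interface is exactly what the demo showed: a waveform produced by one vendor's simulator can be browsed in another vendor's viewer, with each side free to optimize its own implementation behind the API.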

    Related – Getting the Best Dynamic Power Analysis Numbers

    A demonstration showed simulation data created in the Mentor Questa simulator, then viewed with the Cadence SimVision tool. Speaking of Questa, I just learned that it has been updated to run up to 4X faster on regression tests, its new Visualizer Debug Environment has sped up by 2-5X while taking less memory, verification management coverage data collection is now up to 10X faster, and the formal apps can run up to 8X quicker.

    Cadence has committed to using this DDA approach with their newly announced Indago tool.

    Related – SoC Debugging Just got a Speed Boost

    Next Steps
    If you’d like to get involved with the definition and use of DDA, then consider joining this group when it meets in the Valley on July 14th to review the specification.


    Imec’s An Steegen Talks Future Process Technology
    by Paul McLellan on 06-27-2015 at 7:00 am

    I’m an An Steegen groupie. Once or twice a year I see a presentation by her and it is a great summary in a ridiculously short period of time of all the potential upcoming semiconductor technologies. Yesterday was my annual fix at the imec Technology Forum (ITF). Today I got to sit down with her at the conference center.

    An is different from most people at imec who, as a friend of mine described it, “are born at imec, do their PhD at imec and die there.” An went to the US and worked for IBM in Fishkill, NY for over a decade. She is now the SVP of process technology for imec. She is obviously not the only process technologist who knows all this stuff; for sure, every semiconductor company has its own experts. But she is the most free to talk about it. When did you last see TSMC or Intel giving details of all the work they are doing beyond 5nm?

    Imec works with all the leading-edge semiconductor IDMs and foundries, including Intel, TSMC, GF, Samsung, SK Hynix, Micron, Toshiba and SanDisk. There are over 300 engineers assigned from these companies to work at imec, and they all work together on programs. Imec is a sort of neutral ground, but it also allows for pre-competitive R&D cost-sharing and gives everyone access to its pilot line for novel technologies and equipment (which they may not even have access to in their own companies).

    An sees part of imec’s job as deciding what technologies should be in the funnel for the future, then downselecting among them. Ultimately the semiconductor ecosystem has to make some decisions on what it will and will not do. The whole ecosystem needs to move together, since there is no point depending on a piece of equipment that nobody manufactures, or on a material that is not available. For example, almost everyone decided on FinFETs after 20nm (Intel got there first, at 22nm). And everyone has agreed not to worry about 450mm wafers for the foreseeable future.

    So I asked An what she saw as the most likely roadmap for the future.


    First, push the fin as far as possible: higher and thinner fins. One big challenge, apart from the obvious fact that the higher and thinner the fin the more fragile it is, is managing resistance and capacitance with very tall fins. Process control is also a big issue, since everything is just a few atoms thick.

    Next, gate-all-around with lateral (parallel to the substrate) nanowires. The width can be relaxed a little versus fins.

    Then, stacked nanowires. The experience with vertical NAND flash, and the techniques developed for building it, should help here too.

    Next, perhaps, are vertical FETs. It is not yet clear what their performance will be. One nice feature of vertical FETs is that the gate length can be varied just as in the olden days, by depositing thicker or thinner material. One big issue is how to connect to the device’s bottom terminal.


    Metrology is becoming a really big issue. All these vertical approaches also need deposition to be conformal. There is also the possibility of local deposition, which brings new metrology needs.

    The big challenge for the ecosystem is that we are still on a 2-year cycle, but the roadmaps have to start really early so that everyone (equipment makers, manufacturers, materials suppliers, EDA, IP…) knows what to get ready. This is especially acute for metrology vendors, who need to know where to focus.

    EDA is no longer at arm’s length (nor is ARM, hoho) from the process side. Long gone are the days when SPICE parameters and design rules were all that were needed to get the whole flow up and running. Imec has something it calls Design Technology Co-Optimization (DTCO) that focuses on this. For example, imec worked closely with the EDA companies very early on double patterning, which required huge changes to support coloring in everything from layout, place & route, and verification to extraction and more.

    A good example of what is required to move the ecosystem is spin devices. These are very very low power but slow. But so is a lot of IoT so maybe they are the perfect match. They are constructed in the backend in the metal stack. For sure active devices in the metal stack will break lots of EDA tools. I doubt you can even describe them in a PDK. But something like this clearly needs time to get ready. They can use the imec pilot line to fabricate the structures before, eventually, moving off to finalize details at each foundry.


    Synopsys Vision on Custom Automation with FinFET
    by Pawan Fangaria on 06-26-2015 at 7:00 am

    In an overwhelmingly digital world, there is a constant cry about the analog design process being slow, not automated, going at its own pace in the same old fashion, and so on. And, the analog world is not happy with the way it’s getting dragged into imperfect automation so it can be more like the digital world. True, the analog world loves perfection; do the process according to its needs and it’s happy. So, do we have an environment where we can accommodate the persistent demand from the analog world for preserving its unique identity and still deliver the required productivity improvement?

    Since SpringSoft, a provider of custom layout tools, was acquired by Synopsys about three years ago, I’ve wanted to find out what the digital giant has been doing to automate analog design. I found a great opportunity to talk to my long-time former colleagues Fred Sendig, Fellow at Synopsys, and Dave Reed, Product Marketing Director at Synopsys. Fred is a well-known technologist in the analog/mixed-signal space with whom I worked closely in my Cadence days. This meeting was an eye-opener to another level of innovation in the making from Synopsys, only this time in the analog/custom design world. Instead of promising to fully automate analog design (something analog designers have long resisted), Synopsys’ vision is based on the concept of design assistance.

    In the analog world, designer productivity counts more than anything else; automation can’t help if a designer has to redo things to make the design right. Today, the custom design challenge has increased significantly with the introduction of FinFETs at smaller nodes. Multi-patterning is required in the fabrication processes, and the number and complexity of design rules have grown significantly. FinFET-based devices exhibit higher parasitic capacitances and higher resistance in the local interconnect at the lower metal layers, and the extremely thin interconnect makes the devices more vulnerable to electromigration. Also, in FinFET designs, one device in the schematic can map to multiple FinFETs connected in a complex series and/or parallel pattern to achieve the desired drive strength, so the layout of even a simple circuit such as a differential pair may require the placement of hundreds of individual FinFET devices in complex matching patterns. It’s evident that automation is needed to counter such complexity and increased work, but how should it be done without violating designers’ intent, while keeping them happy?
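To see why device counts explode, note that a fin's effective electrical width is roughly twice its height plus its thickness (both sidewalls plus the top), so drive strength comes in quanta of whole fins rather than the continuous widths of planar nodes. A back-of-the-envelope sketch (the fin dimensions below are illustrative, not any foundry's PDK values):

```python
import math

def fins_needed(target_width_nm, fin_height_nm=42.0, fin_thickness_nm=8.0):
    """Effective electrical width of one fin is roughly 2*H + T (both
    sidewalls plus the top), so a planar-equivalent target width must be
    rounded up to a whole number of fins. The default dimensions are
    illustrative, not a real foundry's PDK values."""
    width_per_fin = 2 * fin_height_nm + fin_thickness_nm
    return math.ceil(target_width_nm / width_per_fin)

# A 10 um planar-equivalent device needs over a hundred fins at these
# dimensions, typically split across many multi-fin devices placed in
# matched series/parallel patterns.
n = fins_needed(10_000)
```

With width quantized like this, a differential pair that was two transistors on a planar node becomes a large matched array of multi-fin devices, which is exactly the placement problem assisted automation is meant to tame.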

    Synopsys envisions preserving designers’ complete control over the layout and providing assisted automation to help increase productivity by 3X or more. The custom design environment envisioned by Synopsys will have several productivity boosters including:

    • Storing and reusing placement patterns and prior building blocks with further customization options
    • Using up-front knowledge of physical and electrical effects
    • Reducing the effort required for creation and optimization, allowing designers to work at a higher level without losing focus on the differentiating aspects of their design at the circuit and layout stages. In this way, designs are made right as early as possible in the design cycle, reducing the number of engineering change orders (ECOs).

    The Synopsys vision for this custom design platform has circuit design and layout implementation flows working in a closed loop. The circuit design flow would let designers start from design entry and quickly converge on the final layout with a minimum number of ECOs. There would be fast extraction and simulation engines for quick analysis of electro-migration, IR drop, parasitics, etc. for the layout at different corners and its optimization for the best power, performance and area. There would be advanced analysis features to manage results from hundreds of corners.

    The matching placement would keep the designer’s intent intact in the layout. The designer would then have the flexibility to modify it further according to her/his needs. The custom placer and router would perform automatic placement and routing of the devices. The electrical verification could be done during the layout. The physical simulation could be done at any stage before completion of the layout. This flexibility would deliver faster turn-around after any ECO in the layout.

    Synopsys expects to see ~3X productivity improvement in custom layout design with this approach compared to earlier solutions. Their IP design team has already been using this assisted custom automation flow at advanced process nodes. Synopsys’ MSIP team taped out several FinFET-based designs including USB, DDR, PCIe, HDMI, DPHY and others at TSMC16FFP LL/GL and Samsung 14LPE and LPP.

    Synopsys’ new custom design methodology is driven by the increasing challenges in FinFET technology at lower nodes and by customer demand for improved designer productivity. The solution is already in progress and there are several beta customers using it. It will be interesting to see this methodology rolled out. Assisted automation will be an important and effective upgrade for analog designs in the custom design space.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com