
What’s Really Going to Limit the IoT?
by David G. Simmons on 11-06-2016 at 8:00 pm

There’s a lot of hype about the Internet of Things (IoT), as anyone who’s been reading anything about it these days already knows. There’s wearable tech, there’s healthcare IoT, there’s M2M IoT, and a host of other areas of the IoT that are all projected to explode over the next 10 years. Billions and billions of devices are forecast.

Those are huge numbers and they are exciting to anyone working in the field, or even observing it. But there’s a problem. A big problem: Power. How will we power these billions of devices? Some of them, of course, will be powered by simply plugging them into a constant power supply. Let’s ignore those because we already have a lot of them (computers). A fair number of them — possibly most of them — will be small, embedded devices: wearables, medical devices, environmental sensors, remote sensors, etc. These will need to be powered by batteries. And there’s your problem. Batteries. Lots of batteries. Boatloads of batteries.

I spent a lot of time, back in the day, researching batteries in order for the Sun SPOT platform to achieve a balance between size, weight, and capacity. Oh, and price. Batteries can be expensive. Very expensive. But the size and weight and capacity of batteries isn’t even going to be the biggest problem with the Internet of Things. There’s plenty of research going on all over the world to make batteries smaller, more powerful, and more efficient. No, just the sheer number of batteries is going to be the problem. And it’s a problem that not enough people are thinking about, and almost no one is talking about.

Here’s what I mean. Let’s take the common number of 20-30 billion IoT devices online by 2025. Gartner, Forrester (paywall), IDC, Ovum, and pretty much everyone else is using this number, and I don’t want to argue about it right now, so we’ll just take that as a given and go with 20 billion devices. Now let’s say that roughly half of those devices will be powered by mains, and won’t need a battery. So we’re now left with 10 billion devices with batteries. Some devices can go a year or more on a single battery. Some can only go a few weeks. So let’s say, for argument’s sake, that on average about a third of the devices will have to have their battery changed over the course of a year. That assumes a generous three-year battery life, and it seems reasonable, until you do the following calculations:

20B ÷ 2 = 10B — the number of battery-dependent devices.

10B ÷ 3 = 3.3B — the number of batteries that will have to be changed in a year.

3.3B ÷ 365 = 9.1M — the number of batteries that will have to be changed every day.

Do you see the problem now? Changing 9.1 million batteries a day, every day of the year. But it gets worse. Much worse. Now let’s scale that to a trillion devices — a number that is often used when talking about the IoT. Heck, I’ve been using that number myself since 2004! So let’s scale the above calculations to a trillion.

1T ÷ 2 = 500B — the number of battery-dependent devices. That’s a lot of batteries!

500B ÷ 3 = 167B — the number of batteries that will need to be changed in a year.

167B ÷ 365 = 457M — the number of batteries that will need to be changed every single day. That’s roughly 19 million batteries an hour.
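
If you want to play with the assumptions yourself, here is a minimal Python sketch of the same arithmetic; the mains-powered fraction and the replacement rate are the assumptions from the text, and both are easy to change.

```python
# Back-of-the-envelope battery-replacement arithmetic from the text.
# Inputs are assumptions, not measured data: half the devices on mains
# power, and a third of battery-powered devices needing a change per year.

def battery_changes(devices, mains_fraction=0.5, changes_per_year=1 / 3):
    """Return (devices on battery, changes per year, changes per day)."""
    on_battery = devices * (1 - mains_fraction)
    per_year = on_battery * changes_per_year
    return on_battery, per_year, per_year / 365

for label, n in [("20 billion devices", 20e9), ("1 trillion devices", 1e12)]:
    on_batt, per_year, per_day = battery_changes(n)
    print(f"{label}: {on_batt / 1e9:.0f}B on batteries, "
          f"{per_year / 1e9:.1f}B changes/year, "
          f"{per_day / 1e6:.0f}M/day ({per_day / 24 / 1e6:.1f}M/hour)")
```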

Given those numbers, the IoT will collapse under its own weight. I haven’t extrapolated this to the number of people it would take to change 19 million batteries an hour, but I’m pretty sure it’s not going to be sustainable, if it’s even achievable.

Now, if you’re a battery company, I’m sure those numbers are quite reassuring, but for anyone looking at how the IoT will actually function, it is clear that those numbers are not just unsustainable but completely unworkable. We’ll need an army of people who do nothing but go from device to device changing batteries, 24 hours a day, 7 days a week, in order to keep up. We clearly need another solution.

The big question is: why is no one in the IoT field talking about this? Why is there radio silence on this looming, crippling problem in the IoT? Only a select few are working on solutions to this battery problem.

If you’re in IoT, and you’re not already thinking about how to manage the battery problem in your ecosystem, now might be the time to start.


New IoT Botnets Emerge
by Matthew Rosenquist on 11-06-2016 at 12:00 pm

On the heels of severe Distributed Denial of Service (DDoS) attacks, new Internet-of-Things (IoT) powered botnets are emerging. Hundreds of such botnets already exist in the underground hacking ecosystem, where services, code, and specific attacks can be purchased or acquired. New botnets are being developed to meet the growing demand and to circumvent anticipated security controls.

The latest IoT botnet

Researchers have spotted a new IoT botnet called Linux/IRCTelnet. In just five days it infected 3,500 devices, and it features an old-school adaptation: using Internet Relay Chat (IRC) as the command-and-control structure. IRC is a very old technology, based on the original chat boards of the Internet (pre World Wide Web). Many of the original botnets used IRC a decade ago. It is not particularly difficult for security software to undermine, so it represents an interesting choice by the attackers, who I assume are not top-tier (i.e., not nation-state level).

Linux/IRCTelnet is not based upon the popular Mirai IoT DDoS botnet software, but rather on Aidra code. It does, however, leverage the default passwords of IoT devices to gain control; that is simply the easiest path at the moment. Attackers will evolve as that door closes, so don’t get too excited and think we can ‘solve’ IoT security by eliminating default passwords. It is just one chess move in a long game we are begrudgingly forced to play. Although this Linux bot is still new and small, it holds potential for more directed attacks and highlights how malware writers are working to differentiate their attack code.
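
On the defensive side, even a trivial audit of your own device inventory catches this class of weakness. Below is a minimal, hypothetical sketch; the known-defaults list and the inventory format are illustrative inventions, not any particular vendor’s data.

```python
# Hypothetical sketch: flag devices in your own inventory that still use
# factory-default credentials, the weakness Mirai-class bots exploit.
# Both the defaults list and the inventory entries are illustrative.

KNOWN_DEFAULTS = {
    ("admin", "admin"), ("root", "root"),
    ("root", "12345"), ("admin", "password"),
}

inventory = [
    {"host": "cam-lobby-01", "user": "admin", "password": "admin"},
    {"host": "dvr-floor-2", "user": "svc", "password": "Xq9!t-unique"},
]

for device in inventory:
    if (device["user"], device["password"]) in KNOWN_DEFAULTS:
        print(f"WARNING: {device['host']} still uses factory-default credentials")
```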

More targets will be explored.

We are already seeing a broad diversity of telecommunications, political, business, Internet-infrastructure, and social sites being targeted. The latest is an attack against internet access for the country of Liberia. Access to the web has been spotty for customers, with attackers at times pushing over 600 Gb/s of data to choke the network. Most access is provided by the African Coast to Europe (ACE) undersea cable, and these attacks could affect many other nations in West Africa that rely on this data pipeline.

What comes next?

Expect many more entry-level botnets, which will eventually be supplanted by more professional malware. Thus far, most of the IoT botnets have been basic. This will change as more professional and well-funded players emerge.

Look for the pros to do the following when they come into this space:

  • Patch or change the passwords of victim IoT devices after infection, so others can’t take over their prey
  • Setup more sophisticated and concealed Command and Control (C2) structures to make it more difficult to track bot-herders or interfere with their control
  • Implement encrypted communications to the end-nodes, to conceal instructions, updates, and new targeting instructions
  • Begin exploiting OS/RTOS vulnerabilities on higher-end devices to gain more functionality and persistence
  • Begin siphoning data from IoT devices, which can be valuable for many different purposes, including extending attacks further into homes, businesses, and governments

I predict the next phase of availability attacks will begin right around the time the industry reaches the tipping point in addressing the ‘default password’ weakness. Then confidentiality attacks will come, followed by integrity compromises. Brace for a long fight, as IoT devices are highly coveted by attackers. This matchup should be exciting as it unfolds!

    Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.

    Also read: Let’s Talk About Cyber Risks


Let’s Talk About Cyber Risks
by Matthew Rosenquist on 11-06-2016 at 7:00 am

In the last 12 months, we have seen an unprecedented number of cyber-attacks occur or come to light. Sophisticated attacks against governments, businesses, consumers, and the pillars of the Internet itself. The future appears to be fraught with runaway risks. Can security tame data breaches, ransomware, massive DDoS assaults, cyber theft, and attacks against autonomous and internet-connected devices that potentially put people’s lives in jeopardy?

That was the topic for the advisory council members of the Bay Area SecureWorld conference recently held in San Jose, CA. As moderator, my task was to keep control of a conversation with a room full of passionate experts who live and breathe these challenges every day.

In the past year, a number of significant risks have arisen. The team had no hesitation in talking about some of the big issues.

IoT DDoS Attacks
Consumers and businesses are feeling the impact of massive Distributed Denial of Service (DDoS) attacks, fueled by insecure Internet of Things (IoT) devices. The sheer volume of data and requests these botnets can wield is an order of magnitude beyond what the industry is comfortable handling. The consensus is that everyone should be worried, and the fix is not quick. The IoT industry must change to embrace security across the life cycle of these devices. In a twisted way, these recent attacks are a good wake-up call for the industry. The group agreed it is far better to have these incidents occur now, versus down the road when billions more IoT devices are connected to the global Internet.

Data Breaches
On the heels of the worst year for healthcare data breaches (2015), the hemorrhaging continues. It is by no means limited to healthcare, as many other sectors are being impacted. An interesting debate emerged challenging the role and impact of government regulations in this space. One side postulated that government has weakened security by setting a confusing bar that is too low. Compliance does not make organizations secure, and it creates an unfortunate mental trap where many organizations only fund what is needed to meet the minimal requirements. On the other side, advocates of regulation and auditing pointed out that without a baseline, many organizations would fall severely short. As we all work together, assurance is needed to establish confidence that partners, suppliers, and vendors are implementing security controls that meet expectations.

Nobody believed the legislative process could effectively keep pace with the changes in the industry. But both sides agreed that the lack of consistency, readability, and simplicity in regulations is a problem. Complexity increases costs, delays implementation, and causes confusion. Smarter, lightweight, and easily understood guidelines would be an opportunity to benefit the community.

Credit Card and Online Fraud
Major retailers saw a drop in in-store credit card fraud with the introduction of new ‘chip’ cards in the U.S., accompanied by a correlated rise in online theft, where the ‘chip’ doesn’t play a role. In effect, fraud continues, but the bubble was squeezed from in-store to online properties. It is a predictable outcome when threat agents are viewed as intelligent attackers: they will adapt. Shrinkage figures are not outrageous, but the online security teams are feeling the heat to keep them low. This will likely require a combination of new technology, back-end analytics, and end-user behavioral changes. Greed is a persistent attribute of cyber-criminals. Other activities, such as ransomware, are also currently painful for consumers, healthcare, and small businesses. Enterprises have their ears open for shifts where they may become the primary target, if attackers can find a way to reach into their deep pockets.

Gone in 60 Minutes
The industry is full of risks and opportunities. Sitting in a room of experienced professionals who are sharing their insights and experiences reveals one important fact: this must occur more often if we are to keep pace with the attackers. Our adversaries share information and are masterful at working together to our detriment. We, the cybersecurity community, must do the same in order to survive. Our hour together disappeared quickly. I look forward to more meetings, discussions, debates, and venting sessions.

    Interested in more? Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

    Also read: New IoT Botnets Emerge


Automotive Semiconductor Safety
by Daniel Nenni on 11-05-2016 at 7:00 am

One of the more telling trends in the semiconductor industry is the “fabless systems companies” transformation. Systems companies that used to buy chips are now making their own to better control the systems they are designing: from the chip to the package, PCB, and the complete system. Apple is the best example, as they are now one of the most influential fabless semiconductor companies. Tesla is another example of disruption in the automotive industry, which brings up another very important trend: semiconductor safety.


Last month at ARM TechCon, Cadence came out in support of automotive design for safety with the industry’s first comprehensive Tool Confidence Level 1 (TCL1) documentation that is compliant with the automotive ISO 26262 standard. Cadence also has more than 30 tools that can contribute to an ISO 26262-compliant development lifecycle, which is the broadest EDA tool offering for the automotive industry.

    “Proven safety compliance along with a complete design and verification tool flow is a requirement for Infineon so that we can deliver our AURIX microcontroller designs to the market on time and ensure that they meet the safety standards the automotive market demands,” said Dr. Joerg Schepers, senior director, Microcontrollers Powertrain at Infineon Technologies AG. “Cadence’s work with TÜV SÜD provides us with added confidence because its software tools have been properly assessed to support the ISO 26262 standard.”

Before the conference, I had an interesting discussion about automotive trends and their impact on the semiconductor ecosystem with Rob Knoth, Product Management Director of the Digital and Signoff Group, and Randal Childers, Director of Corporate Quality at Cadence. Design for Safety was the focus of the discussion, so I wouldn’t be surprised to see the DFS acronym catch on.

Bottom line:
Cars are much more complicated than smartphones, or even data centers with huge communications components. As we move toward advanced driver assistance systems (ADAS), design complexity is increasing exponentially, so qualifying point tools will not be enough.

The Cadence announcement is not only the first comprehensive TCL1 documentation, offering the broadest tool support for the ISO 26262 standard; Cadence is also offering both digital and custom design and verification flows, followed by a digital implementation and signoff flow expected to be completed by the end of this year.

You can see the Cadence ISO 26262 Compliance page HERE, but first take a quick look at this automotive video, which talks about “Systems of Systems”; it is definitely worth two minutes of your time. Cadence is promoting a holistic system design approach here that encompasses chip, package, and board.

Attached below is the announcement slide deck, which is worth a glance. According to industry analysts, automotive semiconductors will be the fastest-growing segment through 2020, which I believe. Looking at SemiWiki analytics, automotive is also a fast-growing segment, second only to IoT.


Is FPGA Intel’s Next Big Thing for IoT?
by Eric Esteve on 11-04-2016 at 4:00 pm

I am writing this article in reaction to an article from Seeking Alpha titled “Intel Next Big Thing”. Here is an extract from that article:

    The IoT space is growing rapidly with the advent of connected cars, smart homes and a variety of connected devices and appliances. However, before a full-blown ecosystem around these devices is developed, device makers have to deal with power efficiency. The good news is that with the help of low-power FPGAs, the devices can be made power efficient.

People writing on Seeking Alpha are expected to explain why you should (or should not) buy a company’s stock with respect to that company’s strategy. In this case, the “Next Big Thing” for Intel is the Altera FPGA product line, and the article explains how Intel could generate a high return on its ($16.7B) investment by developing the FPGA business in the data center and IoT. The data center and IoT are completely different stories, and the IoT ecosystem intersects with the data center only if you consider that the data generated by a multitude of IoT systems will end up in the cloud, in the data center. Let’s see why I think that the flagship FPGAs from Altera (or Xilinx, by the way), the products priced over $1,000, very performant and extensively used in networking, are NOT the best choice for IoT applications, if you agree with the prerequisite “have to deal with power efficiency”.

First, let me say that I think FPGA is a great technology. FPGAs have brought a benefit of inestimable value to the fast-changing world that relies on networking systems to carry the data we consume for work or entertainment: flexibility. This flexibility has a cost, and I am not talking about IC ASP (multiplied by 10x or 20x for the same function implemented in an FPGA), but about power consumption. I searched the web for a short definition of the FPGA architecture: “Modern SRAM-based FPGAs have the highest densities, but consume a lot of power and need an external non-volatile memory to store the configuration bit-stream”.

This definition applies to both Altera and Xilinx FPGAs, and we can verify what “a lot of power” means by taking a look at this figure extracted from an Altera white paper titled “Leveraging HyperFlex Architecture in Stratix 10 Devices to Achieve Maximum Power Reduction”:


Moving from a Stratix V to a Stratix 10 device means you move from the 28nm node to the 14nm FinFET technology node. You don’t expect to integrate transceivers (high-speed SerDes-based interfaces) into IoT devices, so let’s focus on core dynamic power, which decreases by 42%, as expected when you move from 28nm to 14nm, and on static power. In fact, the static power has two components. The first is the leakage power that you would have on any other bulk or FinFET technology, but the second component is inherent to FPGA technology. This is the power dissipated by the SRAM (remember that FPGA is an SRAM-based architecture), considered static because you have to keep the SRAM powered continuously to keep the FPGA programmed. The author is very proud of the 10 to 12 watts of static power, but imagine using such an FPGA for an IoT application! A typical IoT device has to stay always-on, with the logic waking up from time to time, but you have to keep the SRAM alive… at the price of this huge static power.

As far as I am concerned, I would not consider the Stratix 10 product line for IoT applications; such static power consumption is far too high (by several orders of magnitude) to comply with IoT requirements.
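
To make “orders of magnitude” concrete, here is a rough scale check; the coin-cell capacity and the microwatt-class sleep budget are nominal assumptions of mine, not figures from the white paper.

```python
# Rough scale check: 10 W of FPGA static power against an IoT energy
# budget. The battery and sleep-budget figures are nominal assumptions:
# a CR2032 coin cell stores roughly 225 mAh at ~3 V.

coin_cell_wh = 0.225 * 3.0   # ~0.675 Wh of stored energy
fpga_static_w = 10.0         # low end of the quoted 10-12 W static power
mcu_sleep_w = 10e-6          # ~10 uW always-on budget, MCU-class assumption

print(f"Coin cell at 10 W static: {coin_cell_wh / fpga_static_w * 3600:.0f} seconds")
print(f"Coin cell at 10 uW sleep: {coin_cell_wh / mcu_sleep_w / 8760:.1f} years")
```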

To end this article on a positive note, Intel is developing interesting new products, like the multi-chip package (MCP) integrating a Broadwell CPU and an Arria 10 GX FPGA in the same package. Such a product will address data center applications (not IoT), providing flexibility thanks to the FPGA, and should help slightly decrease power consumption. The power consumed by chip-to-chip communication (the transceivers in the above figure) should benefit from the two chips sharing a single package. Let’s say that this is not a revolution, but a move in the right direction to reduce power…

Frankly speaking, if embedded FPGA (eFPGA) technology development becomes effective and eFPGA can be used in the data center, that could be the revolution: instead of putting a SoC inside an FPGA, or beside an FPGA as in the above example, integrating just the needed amount of FPGA into a SoC would bring both flexibility and lower power. We will have to wait and see whether eFPGA adoption occurs…

    Eric Esteve from IPNEST

    About Static Power:

The leakage power issue is so serious that in its 2009 report, the International Technology Roadmap for Semiconductors (ITRS) described the situation in terms of an existential crisis:

While power consumption is an urgent challenge, its leakage or static component will become a major industry crisis in the long term, threatening the survival of CMOS technology itself, just as bipolar technology was threatened and eventually disposed of decades ago (14).



EUV transition comes into focus
by Robert Maire on 11-04-2016 at 12:00 pm

    We attended ASML’s analyst day in New York on Halloween. We were very impressed with the quality, content and clarity of the presentations and thought it was one of the best strategic positioning presentations we have seen in the semi industry. We also had an opportunity to meet with several members of senior management after the official presentation to have more detailed and candid discussions.

We came away with the view that, after many years of hard work and obstacles, we are at the point where we have a clearer sense of the timing and the remaining issues to be worked through to get EUV into production in a “predictable” timeframe.

The vision of the EUV promise finally coming into focus is also amplified by the potential upside offered by the synergistic combination with the Hermes acquisition, which recently passed its final hurdles.

Not an “imaging” company but rather a “patterning” company
Although ASML has spoken about this in the past, the length and content of the analyst meeting allowed it to articulate ASML’s desire and potential roadmap to dominate the entire patterning process rather than just the litho step and litho cell.

While Yieldstar has been a successful first step in growing into the overall space, the addition of Hermes adds the foundation for a much larger footprint and market share in the overall patterning market.

It is also clear that ASML is doing what Lam had attempted to do with the KLA acquisition: put together more processes within the patterning arena to dominate this critical area. Without the KLAM combination, ASML is somewhat unopposed in stringing together more processes.

Although it would never pass in today’s regulatory environment (especially post-KLAM), Lam itself would be a potential acquisition target for ASML to complete its full circle of the patterning process, but there are still other things that ASML can do to seal the deal.

    Building a “Wall”…
One can think of ASML trying to build a “walled garden” around this patterning technology in order to keep others out, such as KLAC, NANO, and NVMI. Given that they “own” the litho process, they can indeed limit others’ access to it and more tightly integrate Hermes, much as they did with Yieldstar with great success.

In a way, it is very interesting to note that this combination sailed through regulatory approval unopposed, as compared to KLAM. This would imply that customers don’t have a problem with giving ASML more dominance in patterning, even though they couldn’t stomach KLA and Lam together (obviously Nikon and Canon are too weak and far behind to even matter, as compared to TEL, Hitachi, ASMI, etc.).

    Not just sidestepping KLA but taking them head on…

ASML is not content just finding a way around the KLAC “actinic” blockade (in which KLA halted development of an at-wavelength EUV mask inspection tool); rather, ASML wants to take on KLAC (and NANO, NVMI, and others) directly in the CD market by moving the Hermes e-beam tool beyond being a slow R&D development tool and into the arena of HVM monitoring and control, which is the wheelhouse of KLA.

This frontal assault could be significant, as ASML has the financial and technical wherewithal to execute on the work needed to get to multi-beam and faster e-beam tools. In addition, since ASML owns the litho process, they hold the keys to the information and control knobs which impact the process, as compared to KLA, which is just an observer. (Obviously all this applies to both NANO and NVMI as well…)

    The financial model…
Even though it sounded like a shock to many in the room that EUV had negative gross margins of 75%, it should come as no surprise given where we are in the process. It should also not be a great surprise to expect 50%-plus gross margins, similar to other products, when EUV finally gets up and running, given ASML’s dominance in litho.

From a very simplistic perspective, EUV tools cost roughly twice as much as DUV tools, but you need half as many EUV tools because of double-patterning issues. This makes EUV somewhat of a “wash” in litho cost, but overall litho intensity keeps increasing, which supports the overall revenue increases in the litho space. Essentially, all you need to do is get EUV to a more normal run rate, and gross margin and EPS growth will fall in line with revenue growth to get to the 8 euros per share in 2020 suggested by the company.

Add to that another billion in revenue and one euro in EPS for Hermes, and you get to a total of 11 billion euros in revenue and 9 euros in EPS in 2020.

    The stock…
Although we have become more positively biased over the last 6-9 months, the stock is still not cheap. If we assume 9 euros of EPS in 2020 and use a 15x multiple, we get to roughly a 135-euro stock price in 2020 (or 2019). Given that we are three years away from that with significant execution risk ahead of us, the stock is fairly valued.
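
Spelled out as a quick calculation, using only the figures quoted above (the 15x multiple is the valuation assumption):

```python
# The simple 2020 model from the figures above. All inputs are the
# article's numbers; the 15x earnings multiple is the assumption.

base_eps = 8.0    # euros/share once EUV reaches a normal run rate
hermes_eps = 1.0  # euros/share added by the Hermes business
multiple = 15     # assumed earnings multiple

eps_2020 = base_eps + hermes_eps
print(f"2020 EPS: {eps_2020:.0f} euros -> implied price ~{eps_2020 * multiple:.0f} euros")
```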

For longer-term investors with the patience to wait a few years, collecting a dividend is not too bad either, as the certainty of EUV has increased and thus reduced the overall risk.

    On a relative basis it seems like a reasonably safe long term investment as compared to others in the industry.

PS: For those who were not able to attend in person, you missed a very politically incorrect Halloween costume worn by the head of IR, Craig DeYoung…


RRAM Redux
by Bernard Murphy on 11-04-2016 at 7:00 am

    Advanced memory technologies are a perennially hot topic thanks to a proliferation of data-hungry applications pushing our demand for more capacity and performance at less power and area. Among several technology contenders is Resistive RAM or RRAM (also called ReRAM). In this technology a conducting filament is grown through a dielectric on application of a voltage. RRAM is promoted as a replacement for traditional flash (non-volatile) memories and in principle should be a significantly superior solution. It is lower energy, bit-writeable and much faster to read and write. It also lends itself to 3D stacking which could enable high capacities as well as RRAM stacked directly on top of logic.

But RRAM has offered this hope before, only to get bogged down in an inability to deliver high memory capacity. Building small (~Kb) memories demonstrated all the expected advantages of RRAM: reads and writes in microseconds or less, compared with milliseconds for flash, and lower-voltage operation with much more fine-grained writeability, thus lower power. But capacity was limited by sneak-path currents on read. In RRAM, writing a cell is voltage-based but reading is current-based; when current flows through the cell you want to read, it also flows through neighboring cells, which makes it difficult to confidently interpret the read value. Those added currents also negate the power advantage.
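
A toy resistor model makes the sneak-path effect concrete. The resistance values below are illustrative assumptions, not measured RRAM figures; the point is only how badly a parallel sneak path corrupts a read in a selector-less array.

```python
# Toy illustration of sneak-path read corruption in a passive 2x2 RRAM
# crossbar. Assumed (illustrative) values: HRS = 1 Mohm, LRS = 10 kohm,
# 0.5 V read voltage.

R_HRS, R_LRS, V_READ = 1e6, 1e4, 0.5

# Read cell (0,0), stored as HRS ("0"), while neighbors (0,1), (1,1),
# and (1,0) all happen to be in LRS. Without selector devices, those
# three cells form a series sneak path in parallel with the target.
r_target = R_HRS
r_sneak = 3 * R_LRS                        # three LRS cells in series
r_seen = 1 / (1 / r_target + 1 / r_sneak)  # what the sense amp sees

print(f"target alone: {V_READ / r_target * 1e6:6.2f} uA (clearly HRS)")
print(f"with sneak:   {V_READ / r_seen * 1e6:6.2f} uA (looks like LRS -> misread)")
```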


Crossbar positions themselves as the leader in this field, claiming they have solved the sneak-path problem and transitioned their technology to production. They were founded in 2010 and came out of stealth mode in 2013; they have raised over $80M so far, including a $35M D-round last year (per CrunchBase), so they certainly have the credibility and funding to play in this game. I talked with Sylvain Dubois (VP of marketing and business development) at ARM TechCon last month to get a sense of why they believe they have a scalable solution.

Sylvain first agreed that the big foundries have rejected most RRAM implementations so far because they don’t scale in size. However, at IEDM in 2014 Crossbar announced their own solution to the sneak-path problem, using a method they call field-assisted superlinear threshold (FAST) selection, which provides very high selectivity. They reported that the method suppresses sneak currents in a 4Mb array to below 0.1nA across the commercial temperature range, and that the selectors reliably cycle over 10^11 cycles. He also noted that RRAM has a lower leakage current than flash as feature size decreases. Crossbar summarizes a partial comparison of their RRAM with flash technologies below.


    An important feature of the Crossbar RRAM is that it is compatible with standard CMOS processes, which makes it usable as an embedded macro in a larger design. They announced earlier this year a partnership with SMIC to provide this technology on a 40nm process. They have built an 8Mbit reference macro which is now available for licensing and they are actively working with customers to adapt the macro to their needs. They expect to pursue opportunities in embedded and IoT applications especially (obviously) in China. They are also working on a partnership with one of the mainstream foundries at a more advanced process node, announcement still TBD.

    Based on density measurements they have gathered so far, Crossbar expects they should be able to scale up to terabits per die. If they are right, this could be a real game-changer for non-volatile memory both in embedded and mass storage applications. You can get an overview of Crossbar technology and applications starting HERE. There’s a detailed set of slides on 3D capabilities HERE.

    More articles by Bernard…


How to nail your PPA tradeoffs
by Beth Martin on 11-03-2016 at 4:00 pm

How do you ensure your design has been optimized for power, performance, and area? I posed this question to Mentor’s Group Director of Marketing, Sudhakar Jilla, and product specialist Mark Le. They said that finding the PPA sweet spot is still often done by trial and error: basically, serial experiments with various input parameters until the target specs are met.

Is this efficient? Clearly not. It could take weeks or months, or really never come to fruition because of deadlines. Jilla and Le say that the ugly reality of finding the optimal PPA under the pressure of tight design schedules is a problem ripe for better EDA solutions.

    What’s needed is an RTL-level automated “design space exploration” that lets designers simultaneously explore various design alternatives prior to implementation. The solution must be efficient, easy to use and provide the most useful analysis in the shortest time.

Mentor offers a design space exploration solution in their physical RTL synthesis tool, Oasys-RTL. Jilla says it is different from other available ‘what-if’ analysis solutions because it works at a higher level of abstraction, which makes it faster while still achieving a good level of accuracy. He said it is also pretty easy to use: Oasys-RTL’s integrated commands control the various configurations, and the designer just modifies existing synthesis scripts to add new variables that define the desired exploration.

    Download the new whitepaper RTL Design Space Exploration for Best PPA Using Oasys-RTL.


For example, Le poses a situation in which a designer needs to determine the top frequency at which the design can meet timing. If the target performance is set too high, critical timing paths will be extremely difficult to close. Also, large SoCs have multiple complex clocks, and frequency tuning becomes a challenge when there are tens or hundreds of clocks. Say the design uses four multi-Vt libraries with four target frequencies:

Libraries: LVT 0.95 V, LVT 0.85 V, HVT 0.95 V, HVT 0.85 V
Frequencies: 0.8 GHz, 0.9 GHz, 1.0 GHz, 1.1 GHz

    There are 16 different combinations, or scenarios, to analyze. Oasys-RTL processes each of the 16 scenarios as if they were individual configurations, and then provides a comprehensive summary of results for comparison. The results show the design meets the 0.9 GHz timing target for both LVT libraries.
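
Conceptually, the exploration is a sweep over the cross-product of the input parameters. The sketch below is generic Python, not Oasys-RTL command syntax, just to show the shape of the scenario enumeration.

```python
# Generic sketch of the scenario sweep described above -- not Oasys-RTL
# syntax. Every library/frequency pair becomes one synthesis scenario.

from itertools import product

libraries = ["LVT 0.95 V", "LVT 0.85 V", "HVT 0.95 V", "HVT 0.85 V"]
frequencies_ghz = [0.8, 0.9, 1.0, 1.1]

scenarios = list(product(libraries, frequencies_ghz))
print(f"{len(scenarios)} scenarios to run")  # 4 libraries x 4 frequencies = 16

results = []
for lib, freq in scenarios:
    # A real flow would launch one synthesis run per scenario here and
    # harvest timing/power/area; we only record the configuration.
    results.append({"library": lib, "clock_ghz": freq})
```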

Say you also want to include power in this analysis. Le points to a case in which the results were counter-intuitive: the HVT 0.95 V library consumed more power than the LVT 0.85 V library. It turns out that the slower HVT library requires more optimization to meet timing, which increases both area and power.


    What about floorplan exploration? Jilla says that Oasys-RTL reads in the entire design and automatically creates a floorplan based on the high-level RTL modules and design data flow. Because it uses a patented “PlaceFirst” technology, physical placement information is available early in the design flow for accurate timing and congestion analysis.

    Starting from scratch, you can set utilization targets for the design and use a design space exploration command to scale the values automatically, stepping through higher or lower increments. Other physical attributes can be manipulated as well to change aspect ratios, die size, macro grouping, macro packing, pin locations, etc.

This image shows three unique production-quality floorplans generated in parallel based on different recipes. Jilla says that real customer experiences have shown that Oasys-RTL reduces the time required to generate a production-quality floorplan to a matter of days.

The analyses generate reports that can be saved to a comma-separated values (.csv) file and imported into a spreadsheet. The CSV format can also be used to create graphs, charts, and scatter plots to help visually analyze the metrics. From this vantage point, managers can confidently select the best power, performance, or area.
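
As a minimal sketch of that reporting step (the file name and column headers are hypothetical placeholders for whatever the exported report actually contains):

```python
# Hypothetical sketch: load the exported .csv and plot power vs. clock
# frequency per scenario. Substitute your report's real column names.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("exploration_results.csv")  # the exported report
ax = df.plot.scatter(x="clock_ghz", y="total_power_mw")
ax.set_xlabel("Clock frequency (GHz)")
ax.set_ylabel("Total power (mW)")
plt.savefig("ppa_scatter.png")
```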

To learn more about getting the best PPA with Oasys-RTL’s design space exploration, download the new whitepaper from Mentor.


A Peek Inside the Global Foundries Photonic Death Star!
by Mitch Heins on 11-03-2016 at 12:00 pm

Last week I wrote about the Photonics Summit and hands-on training hosted by Cadence Design Systems, PhoeniX Software, and Lumerical Solutions, and in that article I mentioned that Ted Letavic of Global Foundries laid out a powerful argument for why integrated photonics is a technology that is going mainstream. This article dives into more details from Ted’s presentation. There are some basic misconceptions about photonics that need to be cleared up, and Ted’s presentation did a good job of clearing them up.

The first misconception is that integrated photonics will be a small niche market. Ted did a nice job of pointing out that the major growth driver for photonics will be cloud-based computing. Up to 75% of enterprise IT deployments are now hybrid-cloud based. Cloud deployments are driving most of the server, network, and storage growth, and it’s that growth that will drive a 10X increase in data center traffic over the next five years. Mobile data is another contributor to this growth, and it alone is forecast to grow at an astounding 53% CAGR, from ~6 exabytes (EB) in 2016 to over 30 EB in 2020. In conjunction with greater data volumes comes the need for greater data bandwidth and flexibility. Ted noted the two biggest drivers for increased bandwidth as being the new 5G standard for cellular networks and the disaggregation of data centers, with suppliers moving away from super centers to many smaller centers connected together with high-bandwidth networks. Both of these drivers will require increased bandwidth density and speed and decreased latency. With this in mind, networking bandwidth is forecast to double every two years for the foreseeable future, and integrated photonics will be the prevalent solution in all areas of networking for telecom (long and short haul), mobile networks, and data centers. Transceivers alone for telecom and datacom are forecast to be a $3B market by 2020.
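
As a quick sanity check of the quoted mobile-data forecast (the inputs are simply the numbers above):

```python
# Compound growth check: ~6 EB/month in 2016 at a 53% CAGR through 2020.
start_eb, cagr, years = 6.0, 0.53, 4
print(f"2020 forecast: {start_eb * (1 + cagr) ** years:.1f} EB")  # ~32.9 EB, over 30 EB
```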

The second misconception is that integrated photonics is still in the labs and hasn’t made it to the production fabs. Global Foundries made it abundantly clear that they are ready to take production runs in as many as three different fabs (Fishkill 90nm/300mm, Burlington 90nm/200mm, and Singapore 45nm/200-300mm). All of these fabs are able to run SiGe (silicon germanium) on SOI wafers and support PDKs with all of the necessary components for integrated photonic designs, including vertical grating couplers, low-loss edge couplers, dense high-contrast waveguides, and passive components, as well as high-speed active modulators and photodetectors.

A third misconception about integrated photonics is that because photonic components are large in comparison to their transistor counterparts, 300mm lines would be overkill for such devices. As it turns out, signal loss is a key concern in large photonic circuits, and many of the major sources of loss, such as line-edge roughness in waveguides, alignment errors at junctions, and line-edge placement errors in resonant structures caused by poor critical dimension (CD) control, can be mitigated by 300mm tooling. Global Foundries showed results comparing their 200mm and 300mm tooling, with the 300mm lines having a 3-5X reduction in CD and overlay errors, a 2.5-3X reduction in line-edge roughness, and a 4-5X reduction in CD and overlay errors in modulators, giving them a substantial boost in their RF definition. This tooling, combined with judicious optical proximity correction (another staple of 300mm processing), makes for a very low-loss photonic platform.

A last misconception about integrated photonics is that monolithic solutions combining electronics and photonics are a long way off. Global Foundries has a solution now, says Letavic. Their offering boasts monolithic and hybrid process integration, including high-bandwidth RF and analog for broadband systems and 5G synergy. To strengthen the offering, Letavic also pointed out that Global Foundries has a wealth of capabilities for handling advanced packaging (C4/Cu pillars, TSVs, and MCMs) and test requirements, and has added support for integrated photonics with lower-cost passive fiber alignment-and-attach technologies and surface grating couplers for inline on-wafer testing.

Letavic rounded out his presentation by pointing out that they now have PDKs for these capabilities that are compatible with the Cadence, PhoeniX, and Lumerical EPDA (electronic-photonic design automation) flow covered by the rest of the photonics summit.

    As I mentioned in my last article, this truly is a watershed event for photonics. The AIM Photonics effort in the U.S. needed a production fab into which designs could go from prototype to production and now they have not one, but three!

    Also Read: The Fabless Empire Strikes Back, Global Foundries and Cadence make moves into Integrated Photonics!