
Advanced Materials and New Architectures for AI Applications
by admin on 10-17-2018 at 7:00 am

Over the past 50 years in our industry, there have been three invariant principles:

  • Moore’s Law drives the pace of Si technology scaling
  • system memory utilizes MOS devices (for SRAM and DRAM)
  • computation relies upon the “von Neumann” architecture

Continue reading “Advanced Materials and New Architectures for AI Applications”


Does the G in GDDR6 stand for Goldilocks?
by Tom Simon on 10-16-2018 at 12:00 pm

In the wake of TSMC’s recent Open Innovation Platform event, I spoke to Frank Ferro, Senior Director of Product Management at Rambus. His presentation on advanced memory interfaces for high-performance systems helped shed some light on the evolution of system memory for leading-edge applications. System implementers now have to choose between a variety of memory options, each with its own pros and cons. These include HBM, DDR and GDDR. Finding the right choice for a given design depends on many factors.

There is a trend away from classic von Neumann computing, where a central processing unit sequentially processes instructions that act on data fetched from and returned to memory. While bandwidth is always an issue, for this kind of computing latency became the biggest bottleneck, and the frequently non-sequential nature of data accesses exacerbated the problem. That said, caching schemes helped significantly. The evolution of DDR memory was driven by these needs – low latency, decent bandwidth, low cost.

Concurrent with this, the world of GPUs faced different requirements and developed its own flavor of memory with much higher bandwidth – to accommodate GPUs’ need to move massive amounts of data for parallel operations.

HBM came along as an elegant way to gain even higher bandwidth, but with a more complex physical implementation. HBM uses an interposer to include memory stacks in the same package as the SoC. HBM wins on bandwidth and low power. However, this comes at higher cost and a more difficult implementation, which are serious constraints for system designers.

Let’s dive into the forces driving leading-edge system design. According to Frank, with the exponential explosion of data creation, data centers are being pushed harder than ever to keep up with processing needs. AI and automotive are two other big factors that Rambus sees changing system requirements. Traditional process scaling is slowing down and cannot be depended on to deliver performance improvements; architectural changes are needed. One example that came up in my discussion with Frank was how moving processing closer to the edge, to aggregate and reduce the volume of data, is helping. It is estimated that ADAS applications will demand 512 GB/s to 1000 GB/s to support Level 3 and Level 4 autonomous driving.

One of the major thrusts of his TSMC OIP talk is that GDDR is in a Goldilocks zone for meeting these new needs. It is a proven, cost-effective technology and has very good bandwidth – which is especially important for the kinds of parallel processing needed by AI applications. It also has good power efficiency. He cited the performance of the Rambus GDDR6 PHY, which delivers 64 GB/s of bandwidth at pin rates of 16 Gb/s. Their PHY supports two independent channels and presents a DFI-style interface to the memory controller.
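As a quick sanity check on those figures, the bandwidth follows directly from the pin rate and the interface width. The sketch below assumes a 32-bit data interface (two independent 16-bit channels), which is typical for GDDR6 but not stated explicitly above.

```python
# Back-of-the-envelope GDDR6 PHY bandwidth check.
# Assumption: a 32-bit data interface (two independent 16-bit channels),
# which is typical for GDDR6 but not stated in the article.
pin_rate_gbps = 16        # per-pin data rate, Gb/s
data_pins = 32            # assumed interface width in bits

total_gbps = pin_rate_gbps * data_pins    # 512 Gb/s
total_GBps = total_gbps / 8               # convert bits to bytes

print(f"{total_GBps:.0f} GB/s")           # -> 64 GB/s, matching the cited figure
```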

With their long experience in high-speed interface design, Rambus is able to offer unique tools to help customers determine the optimal memory configurations for their designs. The Rambus tools can also help with the design, bring-up and validation of interface IP, such as the GDDR6 PHY.

Frank mentioned that they have an e-book available online that talks about GDDR and how it is evolving to meet the needs of autonomous vehicles and data centers. It has some great background information on these applications and also offers insights into how GDDR is advancing in performance in 2018 with GDDR6. The e-book is available for download on their site.


Who is Responsible for SIP Revenue Decline in Q2 2018?
by Eric Esteve on 10-16-2018 at 7:00 am

According to ESDA, EDA revenues grew year-over-year by 16.2% in Q2 2018, and this is the good news for our industry. The bad news is that SIP (Design IP) revenues declined by 3.1% over the same period. As far as I am concerned, this figure looks weird, so I will try to understand why the SIP category can go wrong in a healthy EDA market that indicates a growing number of design starts.
Continue reading “Who is Responsible for SIP Revenue Decline in Q2 2018?”


Avionics and Embedded FPGA IP
by Tom Dillinger on 10-15-2018 at 12:00 pm

The design of electronic systems for aerospace applications shares many of the same constraints that apply to consumer products – e.g., cost (including NRE), power dissipation, size, and time-to-market. Both market segments are driven to leverage the integration benefits of process scaling.
Continue reading “Avionics and Embedded FPGA IP”


The Shape of Semiconductor Things to Come
by Robert Maire on 10-15-2018 at 7:00 am

Given that the semiconductor industry is clearly in the midst of a down cycle (even though there are cycle deniers, also members of the flat earth society…), most investors and industry participants want to know the timing of the down cycle and the shape of the recovery, as we want to know when it’s safe to buy the stocks again. We also want to try to predict the ramp rate of the recovery so as to value the growth of the industry and the stocks.
Continue reading “The Shape of Semiconductor Things to Come”


Technology Behind the Chip
by Daniel Nenni on 10-15-2018 at 7:00 am

Tom Dillinger and I attended the Silvaco SURGE 2018 event in Silicon Valley last week with several hundred of our semiconductor brethren. Tom has a couple of blogs ready to go, but first let’s talk about the keynote by Silvaco CEO David Dutton. David isn’t your average EDA CEO: he spent the first 8 years of his career at Intel, then more than 18 years at wafer processing equipment company Mattson Technology, including 11 years as their president and CEO.

David started his keynote with semiconductor market drivers (PC, Mobile, IoT, and Automotive) and moved quickly into Artificial Intelligence and Machine Learning, which will drive the semiconductor industry across most market segments for years to come, in my opinion.

It is a little frightening if you think about it and obviously I do. Not only will our devices be smarter than we are, they will outnumber us by a very large magnitude. Privacy as we once knew it will be gone in favor of security, automation, and social media obsessions.

Automotive is an easy example. Future cars will be very much silicon-based with limited human interaction. I remember waiting impatiently for my 16th birthday so I could drive legally. I got my pilot’s license when I turned 20, which was an even greater thrill. My grandchildren will have thought-controlled flying cars, so what will they look forward to when they turn 16?

David did a nice review of the technical challenges for the semiconductor industry, then moved into more detail about Silvaco and the transformation they have made over the last four years. We have worked with Silvaco since 2013, starting with the blog A Brief History of Silvaco. Since then we have published 62 Silvaco blogs that have been read close to 300,000 times, so we have had a front row seat for this amazing transformation.

The slide deck is located HERE. It is 38 slides (24.8MB) but it is a very quick read and does a very nice job summing up Silvaco and the chip challenges they address. Here is the summary slide for those who need more motivation to download the presentation or are on a mobile device:

  • Global EDA Leader driving growth to provide value to our customers in Advanced IC, Display, Power and AMS
  • Provider delivering complete smart silicon solutions for predictive and comprehensive design work before applying $$$ to Silicon
  • Utilizing acquisitions combined with organic development to drive high growth rate
  • Custom CAD supports fabless community across many foundries
  • Strong IP division with leading automotive IP, Processor Platforms and unique Fingerprint tools
  • Financially strong driving double digit growth from the balance sheet
  • Supporting the industry ecosystem

I attend more conferences than most and have organized a few, so for what it is worth, here are a couple of suggestions. The conference itself was pretty well done, but I would definitely serve lunch, and if you want to attract the engineering crowd, set up tables in the lunchroom for each of your products or market segments and staff them with your field support people (AEs/FAEs). Keynotes and breakout sessions are nice, and marketing people are entertaining, but the face-to-face interactions with current and future customers should be left to the people who do the real work, in my opinion.


ARM TechCon 2018 is Upon Us!
by Daniel Nenni on 10-12-2018 at 12:00 pm

ARM TechCon is, without a doubt, one of the most influential conferences in the semiconductor ecosystem. This year ARM TechCon has moved from the Santa Clara Convention Center to the much larger convention center in San Jose. Last year the conference seemed to be bursting at the seams, so this move makes complete sense. It is a little less convenient, but hopefully there will be more room for people and exhibitors to network.

The highlight of this year’s conference is me signing Prototypical books at the S2C booth (#933). After spending the last three years researching the emulation and FPGA prototyping market, I would be happy to share what I have learned and where I see the industry going from here. The books usually go fast, so try to stop by on Wednesday or early Thursday.

Not that you would need further justification to attend ARM TechCon, but if you do here are the top points from the conference justification toolkit:

Comprehensive Education
For three days, you’ll choose from 60+ hours and seven tracks of embedded systems learning tailored for developers, engineers and executives.

Hands-On Training
It’s one thing to study from afar, but quite another to put your hands to work. Take advantage of practical training opportunities over two full event days.

Exclusive Networking
With more than 4,000 attendees and 100 suppliers, the potential for finding solutions to your challenges and making lasting relationships is huge.

Valuable Resources
You’ll get exclusive access to speaker presentation decks, excellent tools for your post-conference presentation.

Industry Expertise
Arm TechCon is developed entirely by engineers for engineers thanks to the Technical Program Committee, an elite group of practicing engineers.

You can see the full agenda here with keynotes and such. I will definitely be at the opening keynotes. Simon Segars’ keynote on the Fifth Wave of Computing is a must see and I want to see SoftBank’s COO’s keynote on Collaborating to Deliver a Trillion Device World.

Other than that I will be walking the floor and signing books at booth #933 (S2C Inc.) with my good friend Steve Walters. Steve is also an Emulation and FPGA Prototyping professional. He spent 10 years at Quickturn before it was acquired by Cadence. Steve and I worked together at Virage Logic so between us we know it all. Stop on by and give us a chat.

One of the hot topics again this year is hardware security and if that is your interest here is a great place to start:

Tortuga Logic to Demo Latest Hardware Security Solutions at ARM TechCon 2018
Tortuga Logic, a cybersecurity company that identifies vulnerabilities in chip designs, invites press attendees to visit ARM TechCon 2018, booth #1032, for a software demo on Oct. 17-18.

Tortuga Logic is solving the problem of rampant chip security flaws, such as Meltdown and Spectre, with its cutting-edge system-level security solutions.

Who: Tortuga Logic is a cybersecurity company located in San Jose, Calif. Jonathan Valamehr (COO), Andrew Dauman (VP of Engineering) and Juan Chapa (VP of Sales) will be at booth #1032 to showcase solutions that identify hardware security vulnerabilities in chip designs.

Why: Tortuga Logic will demonstrate the capabilities of their flagship product Unison, a software platform that analyzes the security of a chip design alongside the normal process of ensuring the chip is functionally correct. The company will also announce the recent addition of Juan Chapa, the company’s newly-appointed VP of Sales.

When: Wednesday (10/17), 11:30 a.m. – 6:30 p.m.; Thursday (10/18), 11:30 a.m. – 6:00 p.m.

About Tortuga Logic:
Tortuga Logic is a hardware security company offering a suite of security verification platforms to reduce the effort spent identifying security vulnerabilities in modern semiconductor designs. Founded by experts in hardware security, Tortuga Logic’s patented technology augments the industry standard verification tools to enable a secure development lifecycle (SDL) for modern semiconductor designs. For more information, please visit http://www.tortugalogic.com/


EDA Cost and Pricing
by Daniel Nenni on 10-12-2018 at 7:00 am

This is the nineteenth in the series of “20 Questions with Wally Rhines”

When I moved from the semiconductor industry to Mentor, I expected most of my technology and business experience to apply similarly in EDA software. To some extent, that was correct. But there was a fundamental difference that required a change in thinking. Product inventory, especially for semiconductors, must be minimized because it has both real and accounting value. We used to say that semiconductor finished goods are like fish; if you keep them too long, they begin to smell. Software inventory doesn’t even exist. When the order is placed, the actual copy of the software is quickly generated and shipped.


EDA customers are aware of this “software inventory” phenomenon. There is no deadline for purchases that takes into account the lead time for manufacturing, as there is when ordering semiconductor components; an EDA order placed near the end of a quarter can be filled within that quarter. Negotiations for large EDA software purchase commitments tend to drag on until near the end of the quarter when customers suspect they will get the best deal. To counter that, EDA companies provide incentives, or other approaches, to minimize the last minute pressure.

If negotiated terms are not satisfactory, why don’t the EDA companies just let the orders slide into the next quarter? Sometimes they do. But, unlike the semiconductor industry (or any other manufacturing industry), EDA software is between 90% and 100% gross margin. Profit is therefore asymmetric. In a semiconductor business, a dollar of cost reduction improves profit by one dollar and a dollar of incremental revenue increases profit by typically 35 to 55 cents. So cost reduction has a larger impact on profitability than revenue growth. In the EDA industry (or most software businesses), a dollar of cost reduction has nearly the same profit impact as a dollar of incremental revenue. The conclusion: working on incremental revenue growth is just as productive, and a lot more pleasant, than working on cost reduction. In addition, a 5% miss in revenue produces about a 25% miss in profit (if the company’s operating profit is normally 20%), so it’s very disturbing to shareholders when an EDA company misses its revenue forecast because the accompanying profit miss will be large.
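To make that leverage concrete, here is a minimal worked example, assuming roughly 100% gross margin (so costs are essentially fixed) and a 20% operating margin as in the text; the revenue figure is illustrative, not company data.

```python
# Illustrative EDA operating-leverage example (not company data).
# Assumptions: ~100% gross margin, so costs are essentially fixed,
# and a normal operating margin of 20%.
revenue_plan = 100.0
operating_margin = 0.20
fixed_costs = revenue_plan * (1 - operating_margin)   # 80.0

planned_profit = revenue_plan - fixed_costs           # 20.0
actual_revenue = revenue_plan * 0.95                  # a 5% revenue miss
actual_profit = actual_revenue - fixed_costs          # 15.0

profit_miss = (planned_profit - actual_profit) / planned_profit
print(f"profit miss: {profit_miss:.0%}")              # -> 25%
```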

During the last decade or so, another profit leverage phenomenon has been important. With interest rates at very low levels, acquisition of another profitable company using cash is very accretive to earnings, i.e., cash that is sitting on the balance sheet collecting very little interest now becomes a profit-generating asset that increases overall earnings for the acquiring company. A variety of industry analysts and investors have even taken the position that a company that doesn’t fully utilize its borrowing power is under-utilizing a corporate asset. That ignores the risk element associated with borrowing, but it has encouraged lots of acquisitions, especially in high technology. Although there has been some consolidation in the semiconductor industry, the primary change has been a higher degree of specialization. Companies like TI that once produced nearly every type of semiconductor, and very little profit, now produce primarily analog and power devices and consistently generate among the highest profits in the semiconductor industry. Similarly, NXP moved from a broad mix of products to specialization in automotive and security components.

For EDA, there seems to be something stable about oligopolies. In the 1970s, Computervision, Calma and Applicon had the largest combined share of the computer-aided design industry. In the 1980s, it was Daisy, Mentor and Valid. In the last three decades, it’s been Mentor, Cadence and Synopsys. The “Big Three” of EDA in recent years have had a combined market share between 70 and 85%, and most of that time it was 75% plus or minus five percent. One reason that may have led to this phenomenon is that EDA tools are very sticky; there is typically a de facto industry standard for each specialization, like logic synthesis, physical verification, design for test, etc. It’s very hard to spend enough R&D and marketing cost to displace the number one provider of each of these types of software, so changes occur mostly when there are technical discontinuities that absolutely force adoption of new methodologies and design tools. I suspect that most EDA companies make most of their profit from the software products where they are number one in a design segment (GSEDA analyzed nearly seventy such segments for the industry that are over $1 million per year in revenue).

Finally, I find it interesting to look at the EDA cost from the perspective of the companies purchasing the software. Frequently, I hear the complaint that EDA companies never lower the price of their software while semiconductor companies have to reduce the cost per transistor of their chips by more than 30% per year. To analyze this complaint, we used published data to look at the “learning curves” for semiconductors and EDA software (see Figure One).

A semiconductor learning curve is a plot of the log of cost (or price) per transistor on the vertical axis and the log of the cumulative number of transistors ever produced on the horizontal axis. For free markets like semiconductors, the result is a straight line, presumably forever. Semiconductor industry analysts publish data on the number of transistors produced each year as well as the revenue of the industry. The Electronic System Design Alliance publishes the revenue of the EDA industry. When you plot the semiconductor industry revenue per transistor and the EDA industry revenue per transistor on a learning curve, as in Figure One, the straight lines are exactly parallel. That means that the EDA software cost per transistor is decreasing at the same rate as the semiconductor revenue per transistor. Is that a surprise? It shouldn’t be. Just as the semiconductor industry has spent 14% of its revenue on R&D for more than the last thirty years, it has spent 2% of its revenue on EDA software for nearly twenty-five years. If EDA companies failed to reduce their price per transistor at the same rate that the semiconductor industry must reduce its cost per transistor, then EDA would become a larger percentage of revenue for the semiconductor companies, exceeding the 2% average and forcing reduction of other semiconductor input costs.
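For readers who want to experiment with the idea, a learning curve is simply a straight-line fit in log-log space. The sketch below uses invented data points purely to illustrate the fit; it does not reproduce the published industry figures behind Figure One.

```python
# Minimal learning-curve fit: log(cost per transistor) vs. log(cumulative transistors).
# The data points are invented purely to illustrate the straight-line fit.
import numpy as np

cumulative_transistors = np.array([1e15, 1e17, 1e19, 1e21])  # hypothetical cumulative volume
cost_per_transistor = np.array([1e-4, 1e-5, 1e-6, 1e-7])     # hypothetical $/transistor

slope, intercept = np.polyfit(np.log10(cumulative_transistors),
                              np.log10(cost_per_transistor), 1)
print(f"learning-curve slope: {slope:.2f}")  # constant slope = straight line in log-log space
```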

It turns out that the total semiconductor ecosystem behaves similarly, reducing costs to provide better products and a six order of magnitude decrease in the cost per transistor over the last thirty-five years.

The 20 Questions with Wally Rhines Series


How to Increase Energy Efficiency for IoT SoC?
by Eric Esteve on 10-11-2018 at 12:00 pm

If you have read the white paper recently launched by Dolphin, “New Power Management IP Solution from Dolphin Integration can dramatically increase SoC Energy Efficiency”, you should already know about the theory. This is a good basis to go further and discover some real-life examples, like a Bluetooth Low Energy (BLE) chip in GF 22FDX. Because 22nm is an advanced node compared with 90nm or 65nm, for example, a circuit targeting 22nm benefits from a decrease in dynamic power (at the same frequency), but its leakage power will increase dramatically.

In fact, the leakage power increases exponentially with process shrink, as clearly appears in the second picture below. If you want your SoC to get the maximum benefit from this advanced node for an IoT application, you will have to deal seriously with the power consumption associated with leakage.

The webinar will address two aspects of power efficiency, as described in this picture from STMicroelectronics (© STMicroelectronics, used with permission): silicon technology, by targeting FDSOI, and system power management.


Just a remark about the picture below, which describes the evolution of power consumption (active and leakage) with respect to the technology node, from 90nm to 22nm. If you read too quickly, you may think that the dynamic power also increases with process shrink. This is true in watts per cm², but don’t forget the shrink! Your SoC area in 22nm is far smaller than in 65nm or 90nm, and the result of Area × Power/cm² is clearly better when you integrate the same number of gates in 22nm as in 90nm. If you want to increase the battery lifetime of your IoT system, your main concern is to reduce the leakage power by using the proper techniques, described in this webinar.
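A minimal numeric sketch of that Area × Power/cm² point; the power densities and die areas below are assumptions chosen only to illustrate the effect of the shrink, not values from the webinar.

```python
# Illustrative only: total dynamic power = die area x power density.
# Both the power densities and die areas below are assumptions, not webinar data.
power_density_90nm = 0.5   # W/cm^2 at 90nm (assumed)
power_density_22nm = 1.5   # W/cm^2 at 22nm (assumed: density rises at the smaller node)

area_90nm = 0.50           # cm^2 for a given gate count at 90nm (assumed)
area_22nm = 0.05           # cm^2 for the same gate count after the shrink (assumed)

print(f"90nm total: {power_density_90nm * area_90nm:.3f} W")  # 0.250 W
print(f"22nm total: {power_density_22nm * area_22nm:.3f} W")  # 0.075 W, lower despite the higher density
```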


Dolphin presents various data for the typical current consumption of a 2.4 GHz RF BLE chip when active (Rx/Tx/processing/standby) or in sleep/deep-sleep mode, and calculates the duty cycle for a specific application: 99.966% of the time is spent in sleep. This means that applying power management techniques to reduce the leakage power will be effective 99.966% of the time, so the designer should do it. This is true for this BLE RF IC example, as well as for IoT sensors, as they typically spend most of their time in sleep.
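To show what such a duty cycle means for average current and battery life, here is a rough sketch; the active and sleep currents and the battery capacity are assumptions for illustration, not Dolphin’s measured figures.

```python
# Rough average-current and battery-life estimate for a BLE-style duty cycle.
# The currents and battery capacity are assumptions, not Dolphin's figures.
sleep_fraction = 0.99966          # fraction of time in sleep/deep sleep (from the article)
active_current_mA = 10.0          # assumed average current while active (Rx/Tx/processing)
sleep_current_mA = 0.005          # assumed 5 uA sleep current, dominated by leakage

avg_current_mA = ((1 - sleep_fraction) * active_current_mA
                  + sleep_fraction * sleep_current_mA)

battery_mAh = 220                 # typical CR2032 coin cell capacity (assumption)
lifetime_years = battery_mAh / avg_current_mA / 24 / 365

print(f"average current: {avg_current_mA * 1000:.1f} uA")   # ~8.4 uA
print(f"battery lifetime: {lifetime_years:.1f} years")      # ~3.0 years
```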

Improving SoC energy efficiency is clearly a must for IoT systems and any battery-powered devices. The first and obvious reason is to extend the system lifetime (without battery replacement), not by weeks, but really by years. In many cases, the system will be in a place where you just can’t easily replace the battery (think about surveillance cameras).

Other reasons also push for maximum energy efficiency, such as cost, heat dissipation or scaling. A smaller battery should allow packaging the system in a cheaper box. Reducing heat dissipation can be important when you want your product to fit a specific form factor. Or you can decide to keep the same battery life but integrate more features in your system…

In this webinar you will learn about the various solutions for building an energy-efficient SoC. The designer should start at the architecture level and define a power-friendly architecture. This is the first action to take: SoC architecture optimization provides the highest impact, as 70% of power reduction comes from decisions taken at the architecture level, with the remaining 30% split equally between synthesis, RTL design and place & route.

At the architecture level, you first define clock domains (to later apply clock gating), frequency domains and frequency stepping. Then you define multi-supply voltage, global power gating and, later, coarse-grain power gating. Power gating is a commonly used circuit technique that removes leakage by turning off the supply voltage of unused circuits. You can also apply I/O power gating, which can reduce I/O power consumption by an order of magnitude in advanced technologies where leakage power dominates.

As today’s SoCs require advanced power management capabilities like dynamic voltage and frequency scaling (DVFS), you will also learn about these techniques and how to implement them to minimize active power consumption.
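As a rough illustration of why DVFS helps, dynamic power scales approximately as C·V²·f. The effective capacitance and operating points in the sketch below are assumptions for illustration, not figures from the webinar.

```python
# Rough DVFS illustration: dynamic power ~ C_eff * V^2 * f.
# The effective capacitance and operating points are assumptions for illustration.
C_eff = 1e-9       # effective switched capacitance in farads (assumed)

def dynamic_power_watts(voltage, freq_hz):
    return C_eff * voltage ** 2 * freq_hz

full_speed = dynamic_power_watts(1.0, 200e6)   # 1.0 V at 200 MHz
scaled_down = dynamic_power_watts(0.6, 50e6)   # 0.6 V at 50 MHz during light load

print(f"full speed:  {full_speed * 1e3:.0f} mW")   # 200 mW
print(f"scaled down: {scaled_down * 1e3:.0f} mW")  # 18 mW: lowering V and f together gives a big win
```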

Emerging connected systems are most frequently battery-powered, and the goal is to design a system able to survive on, for example, a coin-cell battery not for days or months, but for years. If you dig in, you realize that numerous IoT (and communicating) applications, such as BLE and Zigbee, have an activity rate (duty cycle) such that the power consumption in sleep mode dominates the overall current drawn by the SoC. To reach this target, the designer needs to carefully consider the power network architecture and the clock and frequency domains at design start, and wisely implement power management IP in the SoC.

Dolphin will hold a one-hour live webinar, “How to increase battery lifetime of IoT applications when designing an energy-efficient SoC in advanced nodes?”, on October 16 (for Europe and Asia) at 10:00 AM CEST, or on October 25 (for the Americas) at 9:00 AM PDT. The webinar targets SoC designers who want to learn about the various power management techniques.

You have to register before attending, by going here.

By Eric Esteve from IPnest


One Less Reason to Delay that Venture
by Bernard Murphy on 10-11-2018 at 7:00 am

Many of us dream about the wonderful widget we could build that would revolutionize our homes, parking, health, gaming, factories or whatever domain gets our creative juices surging, but how many of us take it the next step? Even when you’re ready to live on your savings, prototypes can be expensive and royalties add to the pain. Fortunately, ARM have extended their DesignStart program to FPGAs, in collaboration with Xilinx, eliminating cost and royalties for M0 and M3 cores. OK, I’m sure these are folded somehow into device pricing but that’s invisible to you and your savings will last longer than they would if you had to pony up ASIC NREs and license fees or even a conventional FPGA plus license fees.

ARM have been running the DesignStart program since 2010 with a lot of success – over 3,000 downloads and 300 new licensees in the last year alone. Free stuff really does attract a following. The appeal on an FPGA platform is obvious to most of us. At around $70 for a Spartan 6 bread-boardable module, you wouldn’t have a hard time defending that this could easily be absorbed in the family budget. Knowing that you can now get a Cortex M1 (the FPGA-optimized version of the M0) or an M3 – both as soft IP – makes this even more appealing. The most expensive part of your initial effort may be the Keil MDK, though ARM do offer a 90-day eval on that software (maybe you can develop your software really quickly 🙁).

Of course a CPU plus your custom logic may not be enough to meet your needs. Maybe you need some serious multi-processing capability and perhaps wireless capability, so now you’re looking at a Zynq or even a Zynq UltraScale device. These definitely cost more than $70, but still a lot less than an ASIC NRE. Sorry, the DesignStart program doesn’t extend to A-class or R-class cores, but it still extends to the M1 and M3 cores on those platforms if you want heterogeneous processing. So you can prototype your ultra-low power product with high-performance and wireless connectivity when you need it.

The motivation behind this introduction seems fairly obvious. For ARM this is another way to further push ubiquity. Why even think about RISC-V if you can start with a well-known and widely supported core, supported by a massive ecosystem, at essentially no added cost and no additional effort? This should further extend the ARM footprint and network effect in a much more elegant and appealing manner than an earlier and rather embarrassing counter to RISC-V. Meanwhile Xilinx gets to sell more devices with this attractive profile; what’s to lose?

ARM also offers very reasonably priced Arty development boards (based on Artix devices) for makers and hobbyists. These will get you up and running with drag and drop programming, debug support and lots of other goodies. So you really can get all the basics you need to build that first prototype at minimal cost (apart from the Keil MDK).

Of course if you want to get beyond an initial prototype, you are going to have to buy more FPGAs and maybe upgrade to those higher-end FPGAs (but you can carry across the software you developed to those bigger devices). So you may eventually still need a family loan, or Kickstarter funding or a second mortgage. But no-one ever said that getting rich was easy…