Accurate Power Sooner

by Bernard Murphy on 06-20-2017 at 7:00 am

Synopsys PrimeTime PX, popularly known as PT-PX, is widely recognized as the gold standard for power signoff. Calculation is based on a final gate-level netlist reflecting final gate selections and either approximate interconnect parasitics or final parasitics based on the post-layout netlist. The only way to get more accurate power values is to measure the real thing on silicon after fabrication.


By nature, this kind of analysis starts very late in the design flow because you need a near-implementation or post-implementation netlist, and it takes quite a long time to perform because you must run gate-level simulations to generate activity data, which can take days to weeks. When signoff is a final confirmation that power is indeed in spec, this is OK, but cycle times like this are definitely not OK if you find you missed the power budget. Short of planning for another spin, options until now were limited. You could go back to RTL to fix the microarchitecture using SpyGlass Power, a great tool for approximate estimation and optimization earlier in the design flow, but that implies an implementation restart, which would delay tapeout significantly.

What you really need here is an intermediate solution between early RTL estimation and final PT-PX signoff accuracy, something that is still very accurate and based on gate-level netlists, but which you can get to much more quickly. This would enable earlier checks at near-signoff accuracy, allowing time for less disruptive corrective actions where needed. This is what Synopsys PowerReplay (a separate product) can offer, together with PT-PX. Synopsys launched this solution in May of this year; a webinar presented by Vaishnav Gorur (PMM) and Chun Chan (R&D director) provides details.


PowerReplay works together with PT-PX, which still does the power estimation based on the same pre- or post-layout netlist, together with SDF if available. What PowerReplay provides in this flow is the ability to short-circuit all the gate-level simulation setup and a good deal of the simulation run-time, while still generating the activity data you need. It does this by starting from an available RTL-based FSDB, from which it auto-maps the stimulus onto the gate-level netlist. The mapping is improved if the SVF file from synthesis is supplied as an additional input. This results in more accurate power numbers downstream.

You can also do activity analysis in PowerReplay to narrow down time windows you want to use in power estimation. While highest activity doesn’t necessarily imply highest power, high activity along with some knowledge of the design should help you localize best windows for worst-case power. In addition you can localize analysis to look only at certain blocks. And, as you might expect, you can run these analyses in parallel. PowerReplay runs simulation on the gate-level netlist using the stimulus from the RTL FSDB, restricting simulation to your selected time windows and design scope. Put this all together and you’ve gone from a long, grinding gate-level simulation and power estimation starting from time 0 to a much faster turn-time analysis requiring minimal setup and delivering almost the same accuracy.
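To make the window-selection idea concrete, here is a minimal sketch; this is not PowerReplay's actual algorithm, and the event format, function names and numbers are invented for illustration. It ranks fixed-size time windows by toggle count and returns the busiest ones as candidates for detailed power estimation:

```python
# Hypothetical sketch: rank fixed-size time windows by toggle activity.
# The event format and all names here are invented for illustration;
# PowerReplay's actual analysis is proprietary.

def toggle_counts(events, window_ns, total_ns):
    """events: list of (time_ns, signal) toggle events from an activity dump.
    Returns the number of toggles falling in each window."""
    n_windows = (total_ns + window_ns - 1) // window_ns
    counts = [0] * n_windows
    for t, _sig in events:
        counts[t // window_ns] += 1
    return counts

def busiest_windows(events, window_ns, total_ns, top_k=3):
    """Pick the top_k highest-activity windows as candidates for
    detailed gate-level power estimation."""
    counts = toggle_counts(events, window_ns, total_ns)
    ranked = sorted(range(len(counts)), key=lambda i: counts[i], reverse=True)
    return [(i * window_ns, (i + 1) * window_ns) for i in ranked[:top_k]]

events = [(5, "a"), (7, "b"), (12, "a"), (105, "c"), (106, "a"),
          (107, "b"), (108, "c"), (250, "a")]
print(busiest_windows(events, window_ns=100, total_ns=300, top_k=1))
# -> [(100, 200)]
```

As the article notes, raw toggle count is only a proxy; in practice you would weight it with design knowledge before calling a window worst-case.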

Chun talked about a couple of customer case studies. In one case, the customer compared the PowerReplay flow with their existing signoff flow. They found that within the windows they selected for analysis, the PowerReplay flow results were within 2% of those for the reference flow. Also, where the reference flow took 7 days to complete, the PowerReplay-based analysis completed in 8 hours. In a second customer study, there was again a big reduction in run time thanks to the parallel analysis flow, and accuracy was within 2.5% of the reference flow. Across multiple customers Vaishnav said they have seen accuracy within 5% of PT-PX signoff numbers.

A couple of interesting questions came up in the Q&A. One was whether PowerReplay sims take gate delays into account. The answer is yes, as long as you supply SDF. Taking this into account is important for accurate peak power analysis, which would otherwise be skewed. Another good question was how much earlier in the flow customers had been able to run these analyses. Vaishnav said that this flow can be run on blocks, so you don’t have to wait for the full chip, which means that you can start getting accurate block-level estimates typically weeks to months ahead of full-chip analysis.

You can replay the webinar HERE.


Is AI the end of jobs?

by Vivek Wadhwa on 06-19-2017 at 12:00 pm

Artificial Intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it “touches every single one of our main projects, ranging from search to photos to ads … everything we do … it definitely surprised me, even though I was sitting right there.”

The long-promised AI, the stuff we’ve seen in science fiction, is coming and we need to be prepared. Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze the vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from “The Jetsons” and R2-D2 of “Star Wars.”

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion — and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But if we do, we will be the losers. As I discussed in my new book, “Driver in the Driverless Car,” technology is now advancing on an exponential curve and making science fiction a reality. We can’t stop it. All we can do is to understand it and use it to better ourselves — and humanity.

Rosie and R2-D2 may be on their way but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence — but would never be mistaken for a human. They can, however, do a better job on a very specific range of tasks than humans can. I couldn’t, for example, recall the winning and losing pitcher in every baseball game of the major leagues from the previous night.

Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine’s Day, she might make a snarky comment but couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn’t help. That is where the human element comes in and where the opportunities are for us to benefit from AI — and stay employed.

In his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM’s Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. After coming to grips with his defeat, 20 years later, he says fail-safes are required … but so is courage.

Kasparov wrote: “When I sat across from Deep Blue twenty years ago I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer — or even playing chess.”


In other words, we better get used to it and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage.

AI is the next step in improving our cognitive functions and decision-making.

Think about it: When was the last time you tried memorizing your calendar or Rolodex or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don’t make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess games on our smartphones are many times more powerful than the supercomputers that defeated him, yet this didn’t cause human chess players to become less capable — the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

As Kasparov explains: “It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. … What happens when the early influential coach is a computer? The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. … The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train.”

Perhaps this is the greatest benefit that AI will bring — humanity can be free of dogma and historical bias; it can do more intelligent decision-making. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

For more, read my book, Driver in the Driverless Car, follow me on Twitter: @wadhwa, and visit my website: www.wadhwa.com


Design Deconstruction

by Bernard Murphy on 06-19-2017 at 7:00 am

It is self-evident that large systems of any type would not be possible without hierarchical design. Decomposing a large system objective into subsystems, and subsystems of subsystems, has multiple benefits. Smaller subsystems can be more easily understood and better tested when built, robust third-party alternatives may be available for some subsystems, large systems can be partitioned among multiple design teams, and complete system implementation can (in principle) be reduced to assembly of finished or nearly finished subsystems.


But what makes for an optimal implementation doesn’t always align well with the partitioning that best served the purposes of logic design. Physical design teams have known this for a long time and have driven physical tool vendors to add many enhancements in support of:

· Adjusting logic partitioning to better balance sizes for physical units, while also minimizing inter-block routing to reduce demand on top-level routing resources
· Reducing delays in long inter-block signal routes with block feedthrus
· Duplicating high-fanout ports or even logic to reduce congestion

These methods worked well and still do, to some extent, but they paper over a rather obvious problem. The burden of resolving mismatches between logic and physical structure falls entirely on the physical design team yet the line between logical and physical design is more blurred than it used to be, increasing the likelihood of iteration between these phases and therefore repeated effort and delay in re-discovering optimal implementation strategies on each iteration. In a climate of aggressive shift-left to minimize time to market and increasing cost-sensitivity disallowing any sub-optimal compromises, this approach to optimizing the logic/implementation divide is not moving in the right direction.

For those who don’t understand why logical and physical design have become so entangled, here’s a brief recap of a few examples. I’ve mentioned before the effects of low-power structure. Similar power islands may appear in widely separated parts of the logic hierarchy, yet there are obvious area and PG routing benefits to combining such logic into a single power island. But this restructuring can’t simply be moved to physical design, because changes like this must also be reflected in the RTL netlist and power intent for functional/power verification. Or think about MBIST insertion. It would be impossibly expensive to require one MBIST controller per memory in a design containing thousands of memories, so controllers are shared between memories. But the best sharing strategy depends heavily on the floorplan, and changing the strategy obviously affects the RTL netlist and DFT verification. Or think of a safety-critical design in which a better implementation suggests duplicating some logic. If that logic has been fault-injection tested, it’s not clear to me that it can simply be duplicated in implementation without being re-verified in fault-testing.
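As a toy illustration of why MBIST controller sharing depends on the floorplan, here is a sketch that groups memories into shared controllers by physical proximity. The coordinates, capacity limit and tile heuristic are all invented for illustration; real DFT insertion tools use far more sophisticated costing.

```python
# Illustrative sketch only: group memories into shared MBIST controllers
# by floorplan proximity. All names and numbers here are hypothetical.

from collections import defaultdict

def assign_controllers(memories, tile_um, max_per_ctrl):
    """memories: dict of name -> (x_um, y_um) placement.
    Bucket memories into floorplan tiles, then split any tile whose
    memory count exceeds a single controller's capacity."""
    tiles = defaultdict(list)
    for name, (x, y) in sorted(memories.items()):
        tiles[(x // tile_um, y // tile_um)].append(name)
    controllers = []
    for tile in sorted(tiles):
        group = tiles[tile]
        # One controller per capacity-sized slice of each tile's memories.
        for i in range(0, len(group), max_per_ctrl):
            controllers.append(group[i:i + max_per_ctrl])
    return controllers

mems = {"m0": (10, 20), "m1": (30, 40), "m2": (35, 45),
        "m3": (400, 410), "m4": (420, 430)}
print(assign_controllers(mems, tile_um=100, max_per_ctrl=2))
# -> [['m0', 'm1'], ['m2'], ['m3', 'm4']]
```

The point of the sketch is simply that the grouping changes whenever the placement does, which is why the sharing strategy, the RTL netlist and DFT verification are all coupled to the floorplan.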


The obvious solution is to hand over more of this “coarse-grained” restructuring to logic design, leaving fine-grained tuning to the implementation team. This view has already gained traction in several design houses. The challenge though is that manually restructuring an RTL netlist can be very expensive in engineering resource and in time. Unfortunately, hierarchy in this case is not our friend. Moving blocks around a hierarchy looks easy in principle but maintaining all the right connections (rubber-banding connections) while not accidentally making incorrect connections (through naming collisions for example) is a lot harder, especially in modern SoC designs where some blocks you want to move may have hundreds or even thousands of connections.
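To see why rubber-banding is mechanical but fiddly, here is a toy sketch that moves an instance to a different module while preserving its connections and avoiding naming collisions. The netlist model and function names are invented for illustration; real tools like DeFacto's STAR also handle buses, tieoffs and constraint updates.

```python
# Toy sketch of "rubber-banding": moving an instance to a different
# hierarchy level while keeping its connections intact. The dict-based
# netlist model is hypothetical and far simpler than a real netlist.

def unique_net(name, existing):
    """Avoid accidental shorts caused by naming collisions in the new scope."""
    candidate, i = name, 0
    while candidate in existing:
        i += 1
        candidate = f"{name}_mv{i}"
    existing.add(candidate)
    return candidate

def move_instance(inst, src_mod, dst_mod):
    """Move inst from src_mod into dst_mod, punching a port through
    dst_mod for every net the instance was connected to."""
    conns = src_mod["instances"].pop(inst)
    new_conns = {}
    for port, net in conns.items():
        new_net = unique_net(net, dst_mod["nets"])
        dst_mod["ports"][new_net] = net  # new port on dst tied to the outer net
        new_conns[port] = new_net
    dst_mod["instances"][inst] = new_conns
    return dst_mod

top = {"instances": {"u_alu": {"clk": "clk", "d": "bus0"}}, "nets": set(), "ports": {}}
blk = {"instances": {}, "nets": {"clk"}, "ports": {}}
move_instance("u_alu", top, blk)
print(blk["instances"]["u_alu"])
# -> {'clk': 'clk_mv1', 'd': 'bus0'}
```

Even in this stripped-down model, a pre-existing net called "clk" in the destination forces a rename; multiply that by hundreds or thousands of connections per block and the case for automation is clear.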

Which makes this task a natural for automation. The objective is complex but mechanical: restructuring (as one example) requires large numbers of ports and nets to be added, changed or deleted in a systematic way that avoids accidental wire-ORs. Intelligent decisions need to be made on whether fanins/fanouts should be consolidated inside a block or outside (there should be some user control over this), and there should be strategies for handling tieoffs and opens. And at the end of it all, the modified netlist should still be human-readable. You would also like to see some level of changes reflected in constraint files like UPF and SDC. These would probably still need designer cleanup to accurately reflect modified intent, but they should be a good running start.


Sounds like magic? DeFacto offers these capabilities as a part of their STAR platform. In fact, they have been doing this in production for a while and cite some fairly compelling benchmark stats to support their claims. In one example, for a subsystem containing about 4K block instances, manual restructuring by a customer took 12 man-months, followed by 3 man-months to verify/correct the changed design against the original. Using STAR, the same restructuring was completed in 1.5 hours (3.5 hours for bit-blasted nets) and verification was an error-free run through equivalence checking. This flow has also been used to restructure gate-level netlists up to 10M instances (65M gates).

There’s the usual problem getting customer testimonials but a couple of organizations stepped up. Socionext in Japan stated that they saved up to 3% of die area by manipulating one of their designs in gates using STAR. They added that if they had pushed harder, they felt they could have got up to 10% area saving, which is a pretty massive claim. Marvell didn’t share stats but they did say that they had built a cost-effective IP integration and design restructuring system for large SoC designs at RTL. I happen to know that Marvell have been working on solutions of this type for years, so it’s impressive that they finally settled on STAR.

I mentioned restructuring was a part of the STAR platform. More generally this platform can be used to build sub-system and SoC top-levels, to inject control fabrics (such as DFT or power management) on top of an existing netlist, or to seamlessly update memory instances for improved power or performance through auto-generated wrappers. The platform supports a wide variety of design inputs – RTL of all flavors, IP-XACT, Excel, JSON (believe it or not) and more. It’s also scriptable through Tcl, Python and other languages. You can learn more from DeFacto’s webinar on restructuring HERE.


TSMC @ #54DAC!

by Daniel Nenni on 06-16-2017 at 9:00 am

TSMC has been an ardent supporter of DAC for the last 18 years, which has brought in the other foundries because, as the industry leader, wherever TSMC goes the other foundries naturally follow. The exception of course is Intel Custom Foundry, because they march to the beat of a different drummer, if you know what I mean. The CoFluent group of Intel does exhibit at DAC; yes, Intel became an EDA company with the purchase of French ESL company CoFluent in 2011. I’m not sure who their customers are, but probably not the growing number of companies that compete with Intel due to their acquisitions, but I digress…

This year TSMC has some notable speakers that you may want to catch:

TECHNICAL PANEL
Minimizing IC Power Consumption: Top Down or Bottom up Design Methodology. What is the Starting Point?
Moderator – John Blyler – Electronic Design Mag.
Aditya Mukherjee – Microsoft
Tim Saxe – QuickLogic Corp.
Abhishek Ranjan – Mentor, A Siemens Business
Ronald Martino – NXP Semiconductors
Lluis Paris – TSMC
Jon Adams – ON Semiconductor

Lluis Paris is Director of World Wide IP Alliance at TSMC and a recognized IP expert, in fact he is the top IP person I know and that is saying a lot because I know many. Lluis came to TSMC from the Emerging Memory Technology acquisition where he was COO. Not only does Lluis have a PhD in Microelectronics, he also has an MBA so he is definitely worth your time. I also know Tim Saxe from my Zycad and GateField FPGA days. Tim has a PhD in Electrical Engineering from Stanford and is a straight shooter with YEARS and YEARS of experience so he is definitely worth listening to.

The panel is on Monday at 3:30pm in the Convention Center, Ballroom G and I will be at this one. Lluis is also participating in the Mentor Booth Panel on the Impact of ISO 26262 on the fabless ecosystem on Tuesday at 5pm.

Tom Quan is also one of my favorite TSMC presenters. Tom has been at TSMC for ten years and before that he was an EDA staple. Prior to EDA, he was a Design Engineer at Intel. Tom is an AMS expert so that is where you can usually find him. This year he is quite busy:

LUNCH PANELS
Cadence: High Performance Digital Design at 7nm
Tuesday 12:00pm at the Convention Center – Ballroom B & C

Synopsys: Custom Compiler in 7nm
Tuesday 11:30am Hilton Hotel, 6th Floor, Austin Grand Ballroom H

BOOTH PRESENTATIONS
Synopsys: Design Enablement for HPC, Mobile, IoT and Automotive Applications
Monday 2:00pm

Cadence: TSMC Automotive Design Enablement Platform
Tom Quan Tuesday 2:00pm, Chek-San Leong Tuesday at 4:00, and Captain Liu Wednesday 1:30pm.

Captain Liu spent his career in EDA (Springsoft/Synopsys) before coming to TSMC two years ago. Captain is also busy at DAC:

BOOTH PRESENTATIONS
Cadence: TSMC-Cadence Collaboration for Digital Design Enablement at 7nm Monday at 11:30am and 1pm.
Synopsys: Design Enablement for HPC, Mobile, IoT and Automotive Applications Tuesday at 1:30pm.
ANSYS: Tool Flow Verification Monday at 2:45pm.

My good friend Willy Chen will be on the Synopsys breakfast panel (ARM, Synopsys and TSMC collaboration to enable high performance design with the latest processors and FinFET processes, including 7nm) on Monday morning. I will be at that one as well.

Last but not least, Libby Aston and Chek-San Leong will be presenting Design Enablement for HPC, Mobile, IoT and Automotive Applications at the Chip Estimate booth on Tuesday and Wednesday, respectively, at 1:30pm.

You can see all of the DAC events HERE.

Please notice that TSMC 7nm is all over DAC this year, meaning we will see production chips in 2018, absolutely! Exciting times, I hope to see you there!


Don’t Miss “The IP Paradox” Panel @ #54 DAC!

by Eric Esteve on 06-15-2017 at 12:00 pm

Despite the strong consolidation in the semiconductor industry, the Design IP market is still growing: from $3 billion in 2015 to $3.4 billion in 2016. That’s why the DAC IP Committee has organized this panel, titled “The IP Paradox: Growing Business Despite Consolidations” (you can see more on the events page: https://dac.com/events).

If you monitor the EDA&IP quarterly results shared by the ESD Alliance like we do, you can’t miss the fact that the Design IP category became the largest in 2016, growing by 10% YoY in spite of a semiconductor industry affected by consolidation and exponentially rising SoC development costs. That’s a fact, and that’s also a paradox!

It sounds like a paradox for two main reasons.

The first is the consolidation currently underway in the semiconductor industry. The most prominent example is Broadcom Ltd., a company created by the merger of Avago and Broadcom. Broadcom had already acquired PLX Technology and NetLogic before the merger, while Avago had already acquired LSI, a company itself formed from the merger of Agere and LSI Logic. This leads to a valid question: what is the real impact of semiconductor consolidation on IC design starts? We could think, intuitively, that the merger of company A and company B necessarily leads to fewer design starts from AB than from A and B separately…

The second reason is linked to the exponential cost of SoC designs targeting the most advanced technology nodes, which has led to a strong decrease in design starts for these nodes, as shown by Synopsys in 2016 (see the chart below). We can argue that the number of design starts is growing on mature nodes like 180nm, the mainstream node for IoT-related sensors, and this is true. But how many IPs can be integrated in an IC at 180nm? A cheap MCU core, some memory, a standard cell library, and that’s it. Design starts in advanced nodes, however, integrate many more IPs (in the hundreds), and these IPs are much more expensive. Fewer design starts should lead to lower IP revenue, but this is not the case: IP license-only revenue, excluding royalties, grew by 9% in 2016.

So, what are the market forces fueling IP business growth? IP license price increases? The number of IPs in an SoC? Make-vs-buy trends? For anybody who is part of the IP ecosystem (buyer, vendor, design service or foundry), this question is not just theoretical! You need to know the market dynamics affecting your business or your customer’s business. That’s why we have gathered together several of the best experts, representative of these various fields:

Elias Lozano – Open-Silicon, Inc., San Jose, CA
John Koeter – Synopsys, Inc., Mountain View, CA
Sujoy Chakravarty – SilabTech, Bangalore, India
Chengyu Zhu – Semiconductor Manufacturing International Corp., San Jose, CA
Sanjive Agarwala – Texas Instruments, Inc., Dallas, TX

Open-Silicon is a design service company, an IP buyer when realizing SoC designs for its customers, but also an IP vendor (an Interlaken controller IP, for example). Synopsys is the #2 Design IP vendor, active in many IP segments, with IP revenues reaching $450 million as calculated by IPnest (Design IP Report 2016). SilabTech is much, much smaller than Synopsys, but a very dynamic IP vendor specialized in SerDes and PHY IP. It will be important to get the point of view of SMIC, the Chinese foundry, which has to procure IP for its customers, whether designed internally or sourced from IP vendors. TI is a very good example of the complexity of the IP business model: TI still develops IP internally, but is also one of the main IP buyers on the market.

When we talk about an “IP ecosystem”, you can see that it’s a reality when looking at these panelists. To complete this ecosystem, it was important to have a moderator who is literally at the heart of it: Dan Nenni, the founder of SemiWiki. If you are reading this post, that’s because on SemiWiki we have published several IP-related articles every week since 2011; if you count, that makes 1,000 or more articles completely dedicated to IP.

Let me just add that I was honored when the DAC IP Committee accepted this idea (The IP Paradox), which I proposed and organized. I am sure it will be a dynamic panel, helping the audience better understand IP market trends and dynamics!

https://dac.com/events
Eric Esteve from IPnest


DAC 2017 Review

by Bernard Murphy on 06-15-2017 at 7:00 am

DAC is coming next week, in beautiful downtown Austin at the Convention Center. I’ll be there Monday and Tuesday, running around the exhibit area. If you haven’t yet booked your flights and hotel, drop everything and start looking. I’m guessing this will be as popular as it always is, especially given the venue. I know of multiple parties: the Gary Smith EDA kickoff Sunday night and another Gary Smith party, while Solido, Cadence Denali, Silvaco and Cliosoft are all hosting parties downtown.

On the off-chance you’re going to DAC to do something other than party, on keynotes the iconic Joe Costello kicks off Monday with a pitch on “IoT: Tales from the front-line”. Tuesday Chuck Grindstaff from Siemens PLM will talk about digital twins (no, not robot doppelgangers). Wednesday Tyson Tuttle from Silicon Labs will give his view on accelerating the IoT. And Thursday Rosalind Picard will talk about a very intriguing topic, emotion technology, and how this can potentially help people with autism, depression and other problems. I hope they post all the talks because I’ll probably miss them thanks to my schedule.

There’s a lot of focus on IoT, starting with keynotes, but also getting into security, a perennial and constantly evolving challenge. There’s also an interesting looking contest on FPGA-based IoT which should be a must-see for the maker-types among you. Beyond this there is a rich palette of topics of which I can touch on just a few that piqued my interest.

On Sunday, I see a fascinating-sounding workshop on design automation for cyber-physical systems with speakers from Texas A&M, National Taiwan Univ, GM, Technische Univ in Munich, UPenn, UCI and UCF. They’ll be talking about automotive and transportation systems, smart home, building and community, smart battery and energy systems, surveillance systems, cyber-physical biochips, and wearable devices. They’ll be looking at the unique challenges posed by cyber-physical systems, certainly in power, performance, security and so on, but also real-time operation, handling uncertainties in sensor readings and more.

There will also be a workshop on autonomous vehicles, avionics, transportation and robotics which goes by the catchy handle of AVATAR, though I didn’t notice James Cameron among the speakers. They plan to touch on how needs in these areas intersect or can intersect with EDA, which should make this a very interesting opportunity for EDA product strategists.

On Monday, you’ll find sessions on Security IP for the IoT, also a tutorial on security validation for SoCs. Can’t-miss for any new designs. There are also a couple of machine learning sessions in the morning. In the afternoon, there’s a standard tutorial topic on the future of SoC validation and debug. Standard topic, yes, but you really can’t afford to miss this if you have anything to do with V&V. There will also be a session on safe platforms which probably also is a can’t miss for anyone designing for automotive applications.

Tuesday I noticed a session on security analysis and defense and a very perturbing session called “patch your car like your phone – design for extensibility in automotive systems”. Yikes. In the afternoon, you’ll find a session on model-based design for medical devices (I hope there’s no suggestion those be patchable). Later there’s a topic on AI and CNNs which should be fun. Another interesting theme in the afternoon, for verification geeks like me, is on nearby advances and far frontiers in verification. Then there are more very interesting sessions on security, especially a likely contentious topic asking whether hardware security is making a difference.

Wednesday opens with several topics on cyber-physical systems, including hardware design and time control. There’s a very timely topic on how we should test cognitive systems, particularly since unsupervised learning is becoming so hot. There’s a topic that designers, architects and product managers who can’t make it to Austin will wish they could attend – “Is integration leaving less room for design innovation?”. Then there’s more on safety and advances in security and, for the truly ambitious, several talks on design methods for quantum computing.

Thursday has a session on security nuts and bolts, a deep-dive into neural networks, an intriguing session on microfluidics and approximate computing, pushing beyond deep learning into neuromorphic computing and a topically-inspired session on “making neural networks great again”.

Phew – this is just a sample. First, book your tickets, then check out the full agenda HERE.


The FPGA Business Just Got Interesting Again!

by Daniel Nenni on 06-14-2017 at 8:00 am

FPGAs have played an important role in the fabless semiconductor ecosystem, which is why they have a full chapter in our book Fabless: The Transformation of the Semiconductor Industry. Along my career path I spent time at an FPGA start-up, so I know how hard it is. I worked for GateField, which was acquired by FPGA pioneer Actel, which was in turn acquired by Microsemi. There are still GateField people at Microsemi and the architecture they developed is still in play, so congratulations to them.

Intel acquiring Altera was a big blow to the mainstream FPGA industry, which left me wondering who would step up and compete with Xilinx. As it turns out, the answer is Achronix:

“2017 is a breakout growth year, which establishes Achronix as one of the fastest growing semiconductor companies in the world. We are experiencing strong customer demand for both our Speedster FPGAs as well as our newer Speedcore embedded FPGA in hardware accelerator applications. We are looking for new talent to complement a very strong core team to continue delivering highly innovative silicon and software products,” said Robert Blake, President and CEO, Achronix Semiconductor.

“Looking forward, we are entering a new high growth era where our customized core FPGA technology can accelerate a broad range of complex compute tasks in machine learning, artificial intelligence, software defined networks and 5G base stations.”


Honestly, I was blown away by the numbers, since I have known Achronix since they appeared on SemiWiki in April of 2011: Wanted: FPGA start-up! …Dead or Alive?

Coincidentally, I happen to know the Achronix VP of Marketing, Steve Mensor. Steve spent most of his career at Altera with another good friend of mine, so that is how we are connected. I had a quick phone chat with Steve about this press release and found out Achronix will be at DAC next week for the first time, so we will speak more then and dig into the technology for follow-up blogs. He did share a slide deck with me and I have to say I was impressed.

Steve’s presentation hit on two very hot topics in the semiconductor industry: embedded FPGAs and hardware acceleration (think artificial intelligence). SemiWiki’s readership was fairly predictable until IoT hit us in 2014. Now we have a very diverse readership from many domains I don’t recognize. We are seeing the same thing with AI as it touches almost all of the markets we cover, including Mobile, IoT, Automotive, and Security.

According to Steve, in addition to Speedster standalone FPGAs and Speedcore eFPGAs, they are working on a new product they call Chiplets, a die designed for 2.5D package integration.

If you are attending #54DAC next week you can meet Steve and the Achronix team at booth #1821, which is across the aisle from SMIC.

About Achronix Semiconductor Corporation
Achronix is a privately held, fabless semiconductor corporation based in Santa Clara, California. The company developed the FPGA technology that is the basis of its Speedster22i FPGAs and Speedcore eFPGA products. All Achronix FPGA products are supported by its ACE design tools, which include integrated support for Synopsys (NASDAQ:SNPS) Synplify Pro. The company has sales offices and representatives in the United States, Europe, and China, and has a research and design office in Bangalore, India. Find out more at https://www.achronix.com.


Worldwide Design IP Revenue Grew 13.1% in 2016, According to Final Results by IPnest

Worldwide Design IP Revenue Grew 13.1% in 2016, According to Final Results by IPnest
by Eric Esteve on 06-14-2017 at 7:00 am

Despite the strong consolidation in the semiconductor industry, the Design IP market is doing well, very well, with YoY growth of 13.1% in 2016, according to the Design IP Report from IPnest. ARM Group of SoftBank (previously known as ARM Holdings) is again the strong #1, with IP revenues (licenses plus royalties) of $1,647 million and 48.4% market share, followed by Synopsys with about $450 million and a 13% share. Imagination Technologies is still #3, despite IP revenues decreasing by more than 20% in 2016, and Cadence is #4, with IP revenues that also decreased in 2016.

IPnest has defined 11 categories organized into 3 groups, Processor, Physical and Digital IP, and ranks IP vendors in each of these 11 categories. The Processor IP group is the largest, with about 60% of design IP revenues. The group is split into Microprocessor (CPU), Digital Signal Processor (DSP) and Graphics and Image Processor (GPU and ISP) categories. There are strong disparities between these three categories: the CPU category carries about 10x the weight of DSP and 5x that of GPU/ISP.

ARM is obviously the strong #1 in the CPU category, and will probably keep this position forever, due to the royalty mechanism… and to the company’s successful diversification strategy outside of mobile phones, namely in storage (95% market share), wearables (90%), networking (15%) and embedded intelligence (25%). In this CPU category, Imagination Technologies (IMG) is #2 with its MIPS IP family and Synopsys is #3 with the ARC IP family. Now, if we consider the market dynamics in 2016, where IMG saw MIPS IP revenues decrease by 10% while Synopsys grew ARC IP revenues by 13.5%, it wouldn’t be surprising to see Synopsys become #2 in 2017. The open question is “who will consolidate MIPS IP revenues?”, as the product line has been for sale since the beginning of 2017…

In the GPU/ISP category, ARM was #2 with the Mali GPU IP, just behind IMG in 2015, but passed IMG in 2016. ARM now has 46.5% market share in GPU/ISP, versus 35.8% for IMG in 2016. More than a brilliant success for ARM (whose GPU revenues grew only 2.2% YoY in 2016), this change reflects the deficient performance of IMG, whose GPU IP revenue dropped by 23.9% in 2016! As a side note, this revenue drop happened before Apple, IMG’s best customer, announced plans to internally develop the GPU IP to be integrated in its next smartphone application processor…

To end on a positive note, VeriSilicon (after the Vivante acquisition) is #3 in this category, and we can expect the company to keep growing, as its home market (China) is expected to explode during the next few years.

The DSP IP category is led by Cadence (#1) and CEVA, and the #2 did extremely well in 2016, with 22.6% revenue growth. I had the opportunity to review CEVA’s quarterly financial communications for the last 3 or 4 years, and the most noticeable point is the company’s constant diversification away from mobile phones. In its latest communication, for Q1 2017, CEVA announced eight new DSP IP licenses, all of them for non-handset baseband applications!

The next group after Processor is Physical IP, including Wired Interface IP, SRAM memory compilers, other memory compilers, physical libraries, Analog and Mixed-Signal, and Wireless Interface IP, weighing in at slightly less than 30% of the total but more than 50% of license-only revenues. The clear leader, Synopsys, with 29% market share, is active in most of the categories and leads with a 50%+ market share in Wired Interface IP. This category is also the largest, with more than $500 million in 2016, and IPnest has done extensive research on wired interface IP since 2009, including a 2010-2020 survey/forecast which can be found in the “Interface IP Survey”, the best seller from IPnest.

In the second part of this article, I will propose a detailed analysis of the Physical IP categories not covered today, as well as of the Digital IP group (Chip Infrastructure and Miscellaneous Digital).

If you’re interested in this “Design IP Report” released in May 2017, just contact me: eric.esteve@ip-nest.com .

Last but not least, I hope you will go to DAC 2017 in Austin! With my colleagues on the IP committee, we have prepared conference and panel sessions, and we really hope you will take advantage of these four days to learn about the hot topics in design IP, from verification and security for IoT to the IP Paradox panel (moderated by Dan Nenni!)…

I will dedicate a complete blog to this panel very soon.


Eric Esteve from IPnest


The Official SemiWiki #54DAC Party Guide!

The Official SemiWiki #54DAC Party Guide!
by Daniel Nenni on 06-13-2017 at 12:00 pm

With the premier conference for semiconductor design enablement just around the corner, I would like to take this time and space to talk about what is really happening at #54DAC, and that would be the parties! Granted, the DAC parties are nothing like what we used to have in the 1980s and 1990s, since we have matured as an industry, but there is still fun to be had.

First I would like to mention my DAC speaking engagement for this year, which is at the Minalogic Showcase on Tuesday from 4-6pm in room 8C, second floor mezzanine at the Austin Convention Center. You can read more about Minalogic HERE, but you will certainly recognize the president’s name, Philippe Magarshack. In my talk I will be covering Semiconductors: Past, Present, and Future, a retrospective based on my professional experience, my writings on SemiWiki.com, and the analytics behind the writing. Really there are two premises: First, you have to thoroughly understand how you got to where you are today before you can plan for where you are going tomorrow. Second, the pen truly is more powerful than the sword, especially the analytics behind the pen.

The other presenters include:
Eric Mottin, Microelectronics’ Director – Minalogic
Presentation of Minalogic & EDA Members & Specificities
Firas Mohamed, General Manager – Silvaco France
“Fostering Innovation in TCAD, EDA & IP”
Thierry Collette, VP, Architecture, IC Design & Embedded Software Division, Leti
Overview of Design & EDA Challenges for SOI Technology
Ramy Iskander, CEO – Intento Design
Accelerated Constraint-Driven Analog Design and Migration at Functional Level
Isabelle Geday, CEO – Magillem
Integrating Specification, Design and Documentation to Optimize SoC Design Cycle and Legacy Reuse
Jean-Marc Talbot, Senior Engineering Director DSM/AMS – Mentor Graphics

The event is free but space is limited so please register HERE in advance. Following the speakers will be a networking reception which to me qualifies as a DAC party. My beautiful wife and I will be bringing copies of our book “Fabless: The Transformation of the Semiconductor Industry” if you would like a copy. Be sure and ask me to sign it and pretend I am important in front of my wife.

Now to the parties:

The first one of course is the traditional Sunday night Gary Smith EDA Kickoff, which runs from 5-5:30pm in Ballroom D of the convention center and is followed immediately by the official DAC Reception on the 4th floor foyer from 5:30-7pm. The Gary Smith Kickoff generally fills up, so you should get there early if possible. DAC also hosts receptions on Monday, Tuesday, Wednesday, and Thursday evenings (same place and time). You can see a list of all DAC events HERE.

On Monday night the real parties start. After the DAC welcome reception (6:00pm-7:00pm, Trinity St. Foyer), my beautiful wife and I will be at the Gary Smith Benefit Party (7-10pm) at the Speakeasy at 412 Congress Ave, in parallel with the first annual Solido Rooftop Party on the Speakeasy rooftop (7-11pm).

On Tuesday night it gets a bit crazy. You can choose from the Cadence Denali Party (8:00pm) at the Palm Door on Sixth, 508 E 6th Street, and/or the Stars of IP Party (7pm to 12am), again at the Speakeasy. My beautiful wife and I will be going to the ClioSoft Party (7-11pm) at Micheladas Café y Cantina at 333 E 2nd Street. Why, you ask? Because we received a personal invitation, they encourage you to bring spouses/friends, and we really like margaritas. Okay, mostly because we really like margaritas and Micheladas serves the best ones in town.

I’m sure I have missed some of the DAC parties so please add them in the comments section and I can put them on the blog as they come in.

Safe travels and we look forward to seeing you there!

Also Read

Scaling Enterprise Potential with ClioSoft’s designHUB platform

Attending DAC in Austin for Free

ClioSoft Crushes it in 2016!