CEO Interview: Amit Gupta of Solido Design
by Daniel Nenni on 02-13-2017 at 7:00 am

Solido Design Automation is rapidly making a name for itself in EDA. Amit Gupta is founder and CEO of Solido Design Automation, based in Saskatoon, Canada. You should also know that Solido is one of the founding members of SemiWiki.com. In the last six years we have published 44 Solido-related blogs that have racked up more than 200,000 page views. I recently had the opportunity to have a New Year's chat with Amit for a Solido update. Below is a throwback graphic I used in one of my early Solido blogs, and it is still one of my favorites because it is so true.

Tell us about Solido Design Automation
I founded Solido in 2005. We focus on providing variation-aware design software for custom IC designers. Our flagship product, Variation Designer, was launched in 2007 with version 4 available today. Product development and customer applications are both based in Saskatoon, Canada, and we have sales offices around the world.

We currently have a team of about 60 people working to create, provide, maintain, and support products for custom IC designers. We have over 35 major customers working in memory design, standard cell library design, and analog/RF design. Solido’s software helps them meet industry and market demands by building designs with better power, better performance, better area, and better yield.

We have 15 patents protecting our core machine learning technologies, enabling designers to get the most accurate results in the fastest time.

What makes Solido unique?
There are a few aspects that make Solido really unique:

First, we invest heavily in machine learning technologies to provide disruptive solutions to our customers in terms of speed, accuracy, capacity, and verifiability.

Second, we invest heavily in user experience design experts to provide an unmatched product-user interface that is easy to use and deploy quickly across an organization.

Third, we invest heavily in our customer applications team to ensure our worldwide user base of over 2,000 people has great support and gets the full benefits of Solido software.

The combination of these investments has given us the world leading position in variation-aware design software.

Why should designers be concerned with design variation?
There are some big semiconductor trends happening right now. We’re seeing growth in many semiconductor segments: mobile, 5G networking, automotive, IoT and industrial IoT, and cloud computing.

This growth is driving chip complexity. There's a need to move to advanced nodes, including advanced FinFET designs at 16-, 14-, 10-, and 7nm; FDSOI designs at 22- and 12nm; and low-power design variants at both advanced nodes and mature nodes such as 28nm, 40nm, and 65nm.

To meet specifications and to stay competitive in designing the best-performing quality chip with low power, high performance, low area, and high yield, designers need to be able to do extensive SPICE simulations to account for all the potential design variation. Using brute-force PVT and Monte Carlo requires too much time and resources for full design coverage. Solido Variation Designer enables customers to get full design coverage in orders-of-magnitude fewer simulations than brute force.
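To see why brute force gets expensive, here is a minimal sketch of plain Monte Carlo yield estimation (my own illustration, not Solido's flow; the pass/fail check is a hypothetical stand-in for a real SPICE run): resolving a high-sigma failure rate to useful accuracy takes on the order of millions of samples, which means millions of simulations.

```python
import random

def passes_spec(vth_shift):
    # Hypothetical pass/fail check standing in for a full SPICE run;
    # a real flow would simulate the circuit at this variation sample.
    return vth_shift < 3.5            # spec met if the shift stays under 3.5 sigma

def monte_carlo_yield(n_samples, seed=1):
    """Brute-force yield estimate: one 'simulation' per random process sample."""
    rng = random.Random(seed)
    passed = sum(passes_spec(rng.gauss(0.0, 1.0)) for _ in range(n_samples))
    return passed / n_samples

# A ~3.5-sigma failure rate is around 2.3e-4, so resolving it to useful accuracy
# needs millions of samples, i.e. millions of SPICE runs done brute force.
print(monte_carlo_yield(1_000_000))
```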

What does Variation Designer allow the designer to do?
Solido Variation Designer uses machine learning algorithms that enable designers to cut the number of simulations by a factor of 10 to one million, while still achieving the accuracy of brute-force PVT and Monte Carlo analysis. As a result, our customers achieve full design coverage without having to compromise on accuracy, allowing them to get high-performance, low-area, low-power, high-yielding ICs and to stay competitive amid these rapidly advancing semiconductor trends.
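Purely as a conceptual illustration of how machine learning can cut simulation counts (my own toy sketch, not Solido's algorithm; spice_sim is a hypothetical stand-in for a real simulator): fit a cheap surrogate model to a small number of simulated samples, screen a huge Monte Carlo population with the surrogate, and send only the predicted worst-case candidates back to the simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def spice_sim(x):
    # Hypothetical stand-in for an expensive SPICE run: returns a circuit metric
    # (say, a delay) as a nonlinear function of two normalized process parameters.
    return 1.0 + 0.3 * x[0] - 0.2 * x[1] + 0.15 * x[0] * x[1]

def features(X):
    # Simple feature set for the surrogate: constant, linear and cross terms.
    return np.column_stack([np.ones(len(X)), X, X[:, 0] * X[:, 1]])

# Step 1: run a small number of real simulations.
train_x = rng.normal(size=(200, 2))
train_y = np.array([spice_sim(x) for x in train_x])

# Step 2: fit a cheap surrogate model by least squares.
coef, *_ = np.linalg.lstsq(features(train_x), train_y, rcond=None)

# Step 3: screen a huge Monte Carlo population with the surrogate (cheap), then
# re-simulate only the predicted worst-case candidates instead of every sample.
population = rng.normal(size=(1_000_000, 2))
predicted = features(population) @ coef
worst = population[np.argsort(predicted)[-50:]]   # 50 re-simulations, not 1,000,000
print(max(spice_sim(x) for x in worst))
```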

We’ve hit an inflection point between design challenges and a need for variation-aware design tools. With our machine learning technologies, we’re able to meet those challenges. In addition, our user interface allows customers to pick it up and implement it in their organizations quickly and efficiently.

2016 was a big year for Solido. What were some of the highlights?
2016 was a great year. We are now among the largest private electronic design automation (EDA) companies, and we achieved 50% revenue growth again, something we have accomplished each year for the last five years. We were also recognized in Deloitte's Technology Fast 50™ program as one of the fastest-growing technology companies.

We’ve also been hiring very aggressively. Last year we increased our team from 30 to 50 people. This year, we will be more than doubling our team, to over 100 people. We’re actively hiring software developers and customer applications people to support our growing customer base and continue to build the world’s best product.

We’re really looking forward to 2017. Our software is being used by more designers and more companies, and we’ll be launching some exciting new products in 2017.

About Solido Design Automation
Solido Design Automation Inc. is a leading provider of variation-aware design software for high-yield, high-performance IP and systems-on-chip (SoCs). Solido plays an essential role in de-risking the variation impacts associated with the move to advanced and low-power processes, providing design teams improved power, performance, area and yield for memory, standard cell, analog/RF, and custom digital design. Solido's efficient software solutions address the exponentially increasing analysis required without compromising time-to-market. The privately held company is venture capital funded and has offices in the USA, Canada, Asia and Europe. For further information, visit www.solidodesign.com or call 306-382-4100.

Also Read:

CEO Interview: David Dutton of Silvaco

CEO Interview: Toshio Nakama of S2C

CTO Interview: Mohamed Kassem of efabless


DVCon San Jose February 27th – March 2nd
by Bernard Murphy on 02-10-2017 at 7:00 am

DVCon is fast approaching, less than 3 weeks away. As a verification geek, I count this among my favorite conferences, so I'll be there; you'll see me at tutorials, presentations and wandering around the exhibit hall. (Pictures here are from the 2016 DVCon – many of the same attendees will be at this year's conference, after all.)

As usual, Monday is tutorial day, which I personally find very helpful for staying current with emerging and evolving standards in verification. The day kicks off with a session on creating portable stimulus models in the soon-to-be-finalized portable test and stimulus standard (PSS). Quite a few companies are already using this in various (pre-ratified) forms, so I expect it to take off fast. The afternoon continues with a review of the next step in UVM (IEEE 1800.2) and the impact this may have on existing verification environments. Finally, you can wrap up with a tutorial on SystemC design and verification – what's new in the synthesizable subset definition, advice for high-performance modeling, and an update on the emerging UVM-SystemC standard, so you can reuse your SystemC testbenches at RTL.

Tuesday is papers, posters and an intriguing lunch topic (Cadence-sponsored) on whether verification needs differ between edge nodes, hubs, networks and servers. All the papers and posters look interesting; I'll just mention a few that particularly caught my attention: Using UVM sequences to layer protocol verification (Microsoft), Emulation-based low-power validation (Samsung), Trends in verification in 2016 (Harry Foster, Mentor), Assertion-based verification for AMS designs (poster, TI), Formal strategies for IP verification (poster, Microsoft), Regression efficiency with Jenkins (poster, Mentor), and Optimizing random test using machine learning (ARM).

Wednesday starts with a can’t-miss session – users talk back on the portable stimulus standard. Given the audiences I usually see at DVCon, I expect to hear lively debate. Again, a few topics of special interest for me include: Early software development/verification using hybrid emulation/virtual prototyping (Samsung), Making formal mainstream (Intel), Machine Learning-based PVT/worst-case coverage in AMS (TI). The lunch is sponsored by Synopsys with fellow Atrenta alum Piyush Sancheti moderating a discussion on how industry leaders approach verification using Synopsys technology.

The post-lunch panel could be exciting, depending on how controversial the panelists want to be, debating what SystemVerilog has done for us (or to us) and what might come after. In the afternoon papers, I like: Ironic but effective, how formal can improve your simulation constraints (Mediatek), and Methods to improve verification reuse in AMBA-based designs (SK Hynix).

Thursday is back to tutorials, kicking off with Cadence talking about new approaches to reinventing SoC verification. Mentor have framed a tutorial on formal around an entertaining task – how to verify an FPGA-based solar-powered rescue drone using only formal, when you're depending on that drone working to get out word that you need to be rescued. Synopsys follows with a very important tutorial on managing low-power verification complexity, organized by another fellow Atrenta alum, Kiran Vittal. Low-power design has made verification significantly more complex. How do you know you have covered all realistic possibilities, given a seemingly boundless range of configuration and switching options, and how can you systematically approach power verification?

Mentor hosts a lunch on trends in verification with a view to an Enterprise Verification platform – should be interesting. Afternoon tutorials start with Cadence talking about IP verification and warning this is not a solved problem. They’ll discuss how to optimize coverage across the spectrum of verification techniques. Mentor follows with a tutorial on how to create a complex UVM testbench in a couple of hours. I’m curious to see how they do that. Synopsys closes with a tutorial on optimizing productivity with formal and getting to closure with formal (a perennially intriguing topic).

If you are involved in verification, DVCon is the one conference each year you cannot afford to miss. Sign up HERE.

More articles by Bernard…


GlobalFoundries Makes Pure-Play Foundry Great Again!
by Daniel Nenni on 02-09-2017 at 9:00 pm

The pure-play foundry business just got stronger and so did semiconductor manufacturing in the United States. As we all know, the fabless semiconductor industry started by utilizing extra capacity from traditional semiconductor manufacturers (IDMs). However, putting your designs in the hands of a competitor is not a good idea, so the pure-play foundry business was born (in 1987) and has become more dominant every year, absolutely.


Today we still have a wide range of pure-play foundries, but most of them have fallen behind and are still struggling with FinFETs (SMIC and UMC) or have stopped leading-edge development altogether (Powerchip, TowerJazz, Vanguard, Hua Hong, Dongbu, and X-Fab). As a result, the front door was left wide open for IDM foundries (Intel and Samsung) to bring leading-edge technology to the insatiable fabless chip and fabless system companies.

That door is now closing with the GlobalFoundries acquisition of IBM's semiconductor business and the leading-edge process development expertise that came with it. Further proof is the multibillion-dollar expansion announcement GF made today (GLOBALFOUNDRIES Expands to Meet Worldwide Customer Demand).

GF’s name has come up quite frequently of late during conferences and customer visits, especially by the IP companies who are now developing IP for the GF 7nm process. Take a look at the customer quotes and let me confirm that I have heard significant GF chatter involving these companies and about a dozen more:

“GF has had a strong foundry relationship with Qualcomm Technologies for many years across a wide range of process nodes,” said Roawen Chen, senior vice president, QCT global operations, Qualcomm Technologies, Inc. “We are excited to see GF making these new investments in differentiated technology and expanding global capacity to support Qualcomm Technologies in delivering the next wave of innovation across a range of integrated circuits that support our business.”

“Collaborative foundry partnerships are critical for us to differentiate ourselves in the competitive market for mobile SoCs,” said Min Li, chief executive officer of Rockchip. “We are pleased to see GF bringing its innovative 22FDX technology to China and investing in the capacity necessary to support the country’s growing fabless semiconductor industry.”

“As our customers increasingly demand more from their mobile experiences, the need for a strong manufacturing partner is greater than ever,” said Joe Chen, co-chief operating officer of MediaTek. “We are thrilled to have a partner like GF that invests in the global capacity we need to deliver powerful and efficient mobile technologies for markets ranging from networking and connectivity to the Internet of Things.”


The expansion involves their facilities in New York (FinFET), Dresden (FD-SOI), Singapore (CMOS), and the new fab in China (CMOS and FD-SOI), meaning GlobalFoundries is truly a global pure-play foundry:

  • US advanced manufacturing, New York Fab 8: expanding 14nm FinFET capacity by 20% as well as developing advanced 7nm FinFET technology by 2018.
  • European manufacturing, Dresden Fab 1: expanding 22FDX® capacity by 40% by 2020 as well as developing 12FDX™ technology with expected tape-outs in mid-2018.
  • Asia Pacific manufacturing, Singapore 300mm and 200mm fabs: expanding 40nm capacity by 35% at 300mm and 180nm capacity at 200mm, as well as adding new capabilities to produce industry-leading RF-SOI technology.
  • China manufacturing, Chengdu Fab 11: a new 300mm fab in joint venture with the Chengdu municipality to support existing 180/130nm technologies, with production starting in 2018 and then a focus on manufacturing GF's commercially available 22FDX process technology, with volume production expected to start in 2019.

And of course we can all thank Sanjay Jha, one of my favorite semiconductor CEOs, for making the pure-play foundry business great again:

“We continue to invest in capacity and technology to meet the needs of our worldwide customer base,” said GF CEO Sanjay Jha. “We are seeing strong demand for both our mainstream and advanced technologies, from our world-class RF-SOI platform for connected devices to our FD-SOI and FinFET roadmap at the leading edge. These new investments will allow us to expand our existing fabs while growing our presence in China through a partnership in Chengdu.”


Intel Alternative Facts!
by Robert Maire on 02-09-2017 at 12:00 pm

Brian Krzanich, CEO of Intel, announced a $7B investment in Fab 42 in Arizona from the Oval Office, standing next to Trump, as evidence of a positive reaction to Trump's new policies.

Alternative fact:
Paul Otellini, then Intel’s CEO, made a similar promise about Fab 42 in the company of Obama in 2011, during a visit to Hillsboro, Oregon.

BK said that it would bring 3,000 new Intel jobs to Arizona as the state's largest private employer. BK further said that these were not jobs returning to the US from overseas, but that Intel was all about "growth."

Alternative fact: If you add the 3,000 jobs that may be hired in the future for Fab 42 to the 12,000 or so that Intel cut last year, Intel is still down 9,000 jobs…AKA "negative growth."

There was no mention, in the Oval Office, of the H-1B visa program, over which Intel joined with about 100 other Silicon Valley companies this past weekend to sue the government.

Alternative fact:
Intel asked the government for 14,523 H-1B visas and green cards for foreign workers between 2010 and 2015, the years leading up to the 12,000-employee reduction of US workers that followed those additions.

Failed dinner consolation prize…

Perhaps the Oval Office photo op and announcement were done to make up for BK first setting up a dinner at his home for then-candidate Donald Trump and then being forced to cancel it due to the outcry in Silicon Valley. BK, as one of the few Trump supporters in the valley, will be expecting some payback in the form of the tax and regulatory easements that were hinted at during today's announcement.

Fab 42's long-rumored resurrection…

After being put in "mothballs" several years ago, it was only a matter of time before an appropriate use for Fab 42 would be found at an appropriate time. With 10nm firmly in Israel, Fab 42 is an easy choice for 7nm. When you add to the decision-making process the advent of EUV tools at 7nm or 5nm, which require boatloads of electrical power along with gigantic, very expensive cranes to hoist the huge tools into the fab, Fab 42 is the only place that makes sense for 7nm. So the reality is that this was going to happen anyway, but Intel wanted to get some free political capital out of it.

When is a dollar not a dollar? When it's part of Intel's CAPEX plan….

We have heard from many suppliers and tool makers in the semiconductor industry who are wondering who is getting all of the alleged CAPEX spending from Intel. The numbers just don't seem to add up. Intel's announced CAPEX over the last couple of years does not seem to be translating proportionately into dollars spent at suppliers. It's almost as if $1 announced by Intel translates into 50 cents spent in real money. It's almost impossible to measure this accurately, but our anecdotal evidence points to less spending than announced. Part of this may be "sandbagging" by management to make it easier for Intel to hit its financial targets, but even still there's a mismatch.

We have heard from a number of suppliers in the industry that Intel is not only no longer number one in spending, a title it lost long ago, but doesn’t even make it into the top 3 or 4 spenders anymore at many vendors.

Not all $7B goes to the US…

Given that the bricks and mortar are all done at Fab 42 and all that is needed is equipment move-in, we can assume that the $7B is all equipment. If we subtract the spend on ASML, TEL, Hitachi and all other foreign vendors, it's likely that less than $5B actually "stays" in the US.

Intel probably still spends more overseas than in the US on CAPEX…

If we look at Intel's global footprint of fabs, especially the near-term spend in Israel and China, the $7B spend on Fab 42, especially when spread over several years, is in the minority. I am sure that BK could stand next to China's Xi Jinping, in China's equivalent of the Oval Office, for a similar photo op and claim a similar, if not larger, amount of money to be spent and jobs to be created by Intel for memory production in China. We are also sure that Intel got some sweet political deals there as well…..

Intel the stock…
We view today's announcement as not impactful either way for shareholders of the company, but we do applaud Intel's ability to work both political sides, China versus the U.S. and H-1B visas versus foreigner bans. Intel hasn't given up anything it would not have done anyway, and in return it gets an IOU with Trump at a time when Silicon Valley is in open revolt against the new administration.

Maybe BK read “The art of the deal”…..


Notes from the Neural Edge
by Bernard Murphy on 02-09-2017 at 7:00 am

Cadence recently hosted a summit on embedded neural nets, the second in a series for them. This isn’t a Cadence pitch but it is noteworthy that Cadence is leading a discussion on a topic which is arguably the hottest in tech today, with this range and expertise of speakers (Stanford, Berkeley, ex-Baidu, Deepscale, Cadence and more), and drawing at times a standing room only crowd. It’s encouraging to see them take a place at the big table; I’m looking forward to seeing more of this.


This was an information-rich event so I can only offer a quick summary of highlights. If you want to dig deeper, Cadence has said they will post the slides within the next few weeks. The theme was around embedding neural nets at the edge – in smartphones and IoT devices. I talked about this in an earlier blog. We can already do lots of clever recognition in the cloud, and we do training in the cloud. But, as one speaker observed, inference needs to be on the edge to be widely useful; value is greatly diminished if you must go back to the cloud for each recognition task. So the big focus now is on embedded applications, particularly in vision, speech and natural language (I'll mostly use vision applications as examples in the rest of the blog). Embedded applications create new challenges because they need much lower power, they need to run fast on limited resources, and they must be much more accessible to a wide range of developers.

One common theme was the need for greatly improved algorithms. To see why, understand that recent deep nets can have ~250 layers. In theory each node in each layer requires a multiply-accumulate (MAC), and the number of these required per layer may not be dramatically less than the number of pixels in an image, which means a naïve implementation needs to process billions of MACs per second. But great progress is being made. Several speakers talked about sparse matrix handling; many or most (trained) weights for real recognition are zero, so all those operations can be skipped. And training downloads/update sizes can be massively reduced.
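As a rough sketch of why sparsity helps (my own illustration, not any speaker's implementation), a layer's multiply-accumulates can simply skip zero weights, so the work scales with the number of non-zero weights rather than the full layer size:

```python
def dense_mac(weights, activations):
    # Naive layer evaluation: one multiply-accumulate per weight, zero or not.
    return sum(w * a for w, a in zip(weights, activations))

def sparse_mac(weights, activations):
    # Skip zero weights entirely; in a heavily pruned net most terms vanish,
    # so the MAC count drops roughly in proportion to the sparsity.
    return sum(w * a for w, a in zip(weights, activations) if w != 0.0)

weights = [0.0, 0.0, 0.8, 0.0, -1.2, 0.0, 0.0, 0.3]   # mostly-zero (pruned) weights
activations = [0.5, 1.0, 0.25, 0.75, 1.5, 0.1, 0.9, 2.0]
assert dense_mac(weights, activations) == sparse_mac(weights, activations)
print(sum(1 for w in weights if w != 0.0), "MACs instead of", len(weights))
```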

Then there’s operation accuracy. We tend to think that more is always better (floating point, 64 bit), especially in handling images, but apparently that has been massive overkill. Multiple speakers talked about weights as fixed-point numbers and most were getting down to 4-bit sizes. You might think this creates massive noise in recognition but it seems that incremental accuracy achieved above this level is negligible. This is supported empirically and to some extent theoretically. One speaker even successfully used ternary weights (-1, 0 and +1). These improvements further reduce power and increase performance.
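A minimal sketch of the idea, assuming simple uniform quantization (real flows are more sophisticated): trained floating-point weights are mapped onto a handful of signed fixed-point levels, down to 4-bit or even near-ternary codes, and the rounding error per weight stays small:

```python
def quantize(weights, bits):
    """Uniform symmetric quantization of float weights to signed fixed-point levels."""
    levels = 2 ** (bits - 1) - 1                      # e.g. 4 bits -> codes in [-7, +7]
    scale = max(abs(w) for w in weights) / levels or 1.0
    codes = [round(w / scale) for w in weights]       # the small integers actually stored
    return [c * scale for c in codes]                 # dequantized values used in the MACs

w = [0.82, -0.11, 0.05, -0.67, 0.33]
print(quantize(w, 4))   # 4-bit weights: small rounding error per weight
print(quantize(w, 2))   # 2-bit: codes collapse to {-1, 0, +1} times the scale (near-ternary)
```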

Another observation was that general-purpose algorithms are often the wrong way to go. General-purpose may be easier to implement, but some tasks can be optimized much more effectively when the algorithm is tuned to the objective. A good example is image segmentation – localizing a lane on the road, or a pedestrian, or a nearby car. For (automotive) ADAS applications the goal is to find bounding boxes, not detailed information about an object, which can make recognition much more efficient. Incidentally, you might think optimizing power shouldn't be a big deal in a car, but I learned at this summit that one current autonomous system fills the trunk of a BMW with electronics and must cool down after 2 hours of driving. So I guess it is a big deal.


What is the best platform for neural nets as measured by performance and power efficiency? It's generally agreed that CPUs aren't in the running; GPUs and FPGAs do better but are not as effective as DSPs designed for vision applications; and DSPs tuned to vision and neural net applications do better still. And as always, engines custom-designed for NN applications outperform everything else. Some of these can get interesting. Kunle Olokotun, a professor at Stanford, presented a tiled interleaving of memory processing units and pattern processing units as one approach, but of course custom hardware will need to show compelling advantages outside research programs. Closer to volume applications, Cadence showed several special capabilities they have added to their Vision P6 DSP, designed around minimizing power per MAC, minimizing data movement and optimizing MACs per second.

Another problem that got quite a bit of coverage was software development productivity and building a base of engineers skilled in this field. Google, Facebook and similar companies can afford armies of PhDs, but that’s not a workable solution for most solution providers. A lot of work is going into democratizing recognition intelligence through platforms and libraries like OpenCV, Vuforia and OpenVX. Stanford is working on OptiML to intelligently map from parallel patterns in a re-targetable way onto different underlying hardware platforms. As for building a pool of skilled graduates, that one seems to be solving itself. In the US at least, Machine Learning is apparently the fastest-growing unit in undergraduate CS programs.

Pixel explosion in image sensors

AI was stuck for a long time in cycles of disappointment where results never quite rose to expectations, but neural nets have decisively broken out of that trap, generally meeting or exceeding human performance. Among many examples, automated recognition is now detecting skin cancers with the same level of accuracy as dermatologists with 12 years of training, and lip-reading solutions (useful when giving commands in a noisy environment) are detecting sentences at better than 90% accuracy, compared to human lip-readers at ~55%. Perhaps most important, recognition is now going mainstream. Advanced ADAS features such as lane control and collision avoidance already depend on scene segmentation. Meanwhile the number of image sensors already built surpasses the number of people in the world and is growing exponentially, implying that automated recognition of various types must be growing at similar speeds. Neural net-based recognition seems to have entered a new and virtuous cycle, driving rapid advances of the kind listed here and rapid adoption in the market. Heady times for people in this field.

You can learn more about Cadence vision solutions HERE.

More articles by Bernard…


FPGA Design Gets Real
by Tom Simon on 02-08-2017 at 12:00 pm

FPGAs have become an important part of system design. It's a far cry from how FPGAs started out – as glue logic between discrete logic devices in the early days of electronic design. Modern-day FPGAs are practically SoCs in their own right. Frequently they come with embedded processor cores, sophisticated IO cells, and DSP, video, audio and other types of specialized processing cores. All of this makes them suitable, even necessary, for building major portions of system products.

Early FPGAs were tiny compared with their contemporaneous ASICs. The tools for implementing designs were often cobbled together by the FPGA vendors themselves, or there were a few commercial offerings that offered advantages over the vendor-specific solutions. Nevertheless, these tools always took a back seat in terms of capacity, performance and sophistication when compared to the synthesis and place-and-route flows for custom ASICs.

In today’s market, the previous generation of FPGA tools would not stand a chance. Requirements and expectations are much higher. Good thing there has been significant innovation here. Actually, it’s astounding how far FPGA tools have come.

I was reading a white paper published by Synopsys that delves into the state of the art for FPGA design. It makes a good read. It is written by Joe Mallett, Product Marketing Manager at Synopsys for FPGA design. The title is “Shift Left Your FPGA Design for Faster Time to Market”.

Just as is true for ASICs, FPGA designs rely heavily on third-party IP, which comes with RTL and constraint files that FPGA tools need to digest easily. The same is true for application-specific RTL and constraints developed on a per-project basis. FPGAs have also evolved complex clock structures – in large part to meet the power and performance requirements of the designs they are used in. Designers need tools that easily handle multicycle and false-path definitions.

Once the baseline design specification is in place, the design can move to First Hardware. While it is certain not to meet all the performance requirements of the final design, it is a significant milestone that leads to further optimization heading toward the final deliverable.

In the Synopsys white paper, Joe talks at length about the ways their Synplify tool can accelerate each step in the FPGA design flow, the initial step being getting to First Hardware. Joe discusses initial design setup and mentions acceleration of synthesis through automatic partitioning and execution on multi-core and/or compute-farm resources. There are a lot of benefits in this approach, from a lower per-machine memory footprint to incremental design.
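Conceptually, partition-parallel synthesis looks something like the sketch below (my own illustration of the general idea, not Synplify's actual mechanism; the partition names and the synthesize_partition helper are hypothetical): each partition compiles in its own worker, so per-machine memory stays bounded and unchanged partitions can be reused incrementally.

```python
from multiprocessing import Pool

def synthesize_partition(partition):
    # Hypothetical helper: compile one partition of the design. In a real flow this
    # would invoke the synthesis engine on a local core or a compute-farm node.
    name, gate_count = partition
    return name, f"netlist for {gate_count} gates"

# Hypothetical partitions of a large FPGA design.
partitions = [("cpu_subsys", 2_000_000), ("dsp_block", 800_000),
              ("io_ring", 300_000), ("video_pipe", 1_500_000)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # one worker per core or farm slot
        results = dict(pool.map(synthesize_partition, partitions))
    # Each worker holds only its own partition, which keeps per-machine memory low,
    # and an unchanged partition's result can simply be reused on the next run.
    print(sorted(results))
```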

The next milestone is debug. Synopsys has developed extremely clever ways of adding high design visibility, which greatly accelerates debug. What's even better, and a unique advantage of FPGAs, is that debug can run in real time and at speed, in the system-level environment. The incremental design capabilities shorten turnaround time at this stage too.

Last, we are left with tuning and optimization. Not surprisingly, the means to accomplish this exist at every stage of the design process. Highly optimized timing needs to be coupled with design-size optimization. Synthesis, placement and routing all play a role in the final outcome. Rather than spoil the details, I recommend reading the white paper here. There is also a companion paper on distributed synthesis in Synopsys' Synplify FPGA design tool that I found very interesting.


Machine Learning for Dummies
by Kartik Hosanagar on 02-08-2017 at 7:00 am

I write a lot about data-driven algorithms, in particular those informed by machine learning. I thought it would be nice to give the low-down on machine learning for the uninitiated. Below, I discuss four essential questions. The answers are based, in part, on a recent discussion with Pedro Domingos, author of The Master Algorithm.

1. Should we care about artificial intelligence and machine learning?
Machine learning and AI touch your life every minute of every day, from applications you use at work to how you choose products to buy (Amazon recommendations), and even who you marry or date (Match.com, Tinder recommendations). A third of all marriages start on the internet, and the matches are created by algorithms. As a citizen, consumer, and professional, you don't need the gory details of how machine learning works, but you do need the big picture.

2. What’s the distinction here between AI and Machine Learning?
AI means getting computers to do all the things that it takes human intelligence to do, like reasoning, understanding language and the visual world, navigating, and manipulating objects. Machine learning is a sub-field of AI that deals with the ability to learn. Learning is the one thing that underlies all the others. If you had a robot that was as good as humans at everything but couldn't learn, five minutes later it would have fallen behind.

3. Let’s look at the history of AI – how did we get here, and what was the most important turning point?

If we rewind to the early days of the field, one of the interesting aspects was that it got named artificial intelligence. The runner-up was "complex information processing," which of course sounds incredibly boring. Calling it AI made it extremely ambitious, which has been partly responsible for a lot of the progress. At the same time, it also created very inflated expectations, which were premature. Intelligence seems like an easier problem than it really is because we are intelligent and take it for granted. But evolution spent 500 million years making us intelligent.

People believed that the way to really build intelligent systems was to have them learn. Initially, both the understanding of the problem and the computing resources were not up to the task. Your brain is the best supercomputer on earth, and people were trying to do this with the computers they had back then. They got a little ahead of themselves. In the eighties, there was a shift towards so-called knowledge systems. In these systems, you provide the system with as much knowledge as possible. This allowed systems to do seemingly intelligent things like play chess and diagnose diseases. But the moment the system encountered a situation outside the knowledge base, the system failed. In the end, the systems were too brittle. They didn't learn.

Then, what happened led to the present explosion of AI. People went back and said that learning is actually essential — we’re never going to be able to have intelligence without learning. Within learning, the most recent success is based on algorithms that emulate the brain on tasks like vision and speech recognition (techniques known as neural networks and deep learning).

4. What does a machine learning algorithm “look” like?
There isn’t just one algorithm. We have many algorithms and approaches today – based on statistical approaches (e.g. Bayesian learning), evolutionary techniques (genetic algorithms), logical induction (association rules) and approaches that mimic the brain (neural nets). No one algorithm is good at everything. In practice, one chooses from these. The question is whether there is a master algorithm that can learn an infinite variety of things. That’s a separate topic I’ll discuss in the future and that Pedro discusses in detail in his book.
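For readers who have never seen one, here is what a very small learning algorithm can look like in code. This is a classic perceptron, chosen by me purely to make "learning from examples" concrete; it is not from Pedro's book, and real systems use far richer models:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a tiny binary classifier from (features, label) examples."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in examples:                        # label is 0 or 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred                           # "learning" = nudging weights by the error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy data: label is 1 when x + y > 1, else 0.
data = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.4, 0.3], 0), ([0.7, 0.9], 1)]
print(train_perceptron(data))
```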

Kartik Hosanagar is a Professor of Technology & Digital Business at The Wharton School.


Scott Jones ISS Talk – Moore’s Law Lives!
by Scotten Jones on 02-07-2017 at 12:00 pm

I was invited to give a talk at this year's ISS conference. The talk seemed to be very well received, and I was asked to blog about it for SemiWiki. Parts of the talk will be familiar to SemiWiki readers from some of my previous blogs, but I also went into more detail around some scaling challenges. The following is a summary of what I presented.
Continue reading “Scott Jones ISS Talk – Moore’s Law Lives!”


Aspirational Congruence
by Bernard Murphy on 02-07-2017 at 7:00 am

When talking to suppliers about their products, conversation tends to focus heavily on what they already have and why it is the answer to every imaginable need in their space. So it’s refreshing when a vendor wants to talk about where customers want to go without claiming they already have the answer wrapped up in a bow. I recently had such a conversation with Frank Schirrmeister of Cadence on the topic of congruence in hardware modeling through the ever-squeezed hardware and software development cycle.


The root of the problem is well-known – product teams are aggressively compressing development cycles to meet fast-moving market windows. One especially long pole in this effort is getting software ready as soon as possible after silicon. Virtual prototyping is a great way to accelerate development of higher-level software, but when you get closer to the hardware or you need to check performance, you have to turn to hardware modelling on emulation or prototyping systems.

There's no question that simulation still plays a dominant role in hardware development and verification, but it is wildly impractical as a platform for developing or verifying software. Simulation also struggles with debug and regression in big, long and complex tests, particularly when trying to isolate root-cause problems over millions or billions of cycles. This is why emulation, running orders of magnitude faster than simulation, has become so popular. Emulation doesn't compile as fast as simulation, but at compile rates of 140-150M gates per hour you can get 5-6 turns a day, making this approach a practical alternative even in relatively early stages of system development.

When it comes to software development and validation, emulation is arguably the best platform for bare-metal software (firmware and drivers, for example) but is far too slow to practically support develop/debug cycles in middleware. That's where you want to use FPGA prototypes, which can run at least another order of magnitude faster and let you develop/debug in the context of a larger system. This is closer to the expectations of software developers, who want to work with what Frank calls "cup of coffee interactivity" – they're OK going for a cup of coffee and waiting for a run to complete, but they're not OK with having to wait until the next day.

All of this is great, but how does it help software developers start earlier? FPGA prototypes are fast, but bringup can take weeks, not an investment you want to make when the design micro-architecture is still being refined. So is early software development a Catch-22? Not if you have emulation/prototyping congruence. Imagine that, starting from an emulation build, prototype bringup could be reduced to a push-button step which would complete in limited time (perhaps a few days), and that cycle-level correspondence could be guaranteed between emulation and prototype behavior; then software developers could start much earlier, even at mid-stage RTL. After all, they don't care about detailed hardware behavior (and especially not hardware debug), they just need enough to get started.

This goal is partly achieved today and partly aspirational. Congruence in modeling behavior is already available at cycle-level accuracy, and RTL compile is already common between the two platforms. You want clock-tree transformation, gated-clock handling and partitioning to the prototype to be transparent, which is largely in place today. But the back-end flow, into Vivado P&R for Protium, still requires some hands-on involvement, which could be greatly reduced or even eliminated with perhaps some compromise in modeling performance. For a software developer who doesn't want or need to know about FPGA tooling, this may be completely acceptable for early software validation/development. And reuse of previously compiled partitions/blocks with hands-free dependency management can further reduce compile times.

So to recap, the aspiration is to be able to start software development and validation while the RTL for the design is still in some level of flux. To do this it is essential to have congruence between emulation and prototyping, in setup and in modeling. Prototype setup should approach a push-button step given a functioning emulation, and should compile as fast as possible (within as little as a day if feasible) so that software development can stay nearly in sync with hardware evolution. And the models the hardware team and the software team are using should be congruent in behavior.

Some of this is aspirational and some is already covered in Cadence prototyping (see here); however, I'm sure Frank didn't share this with me simply to admire the problem. I expect that Cadence is cooking up more solutions to congruence as I write. For more info, watch for news in this area from Cadence.

On a final note, I chose the title of this piece partly because I thought it would make a great German compound-word, Aspirierendkongruenz as near as I can tell, based on Google translate (Frank can comment). Idle play of an idle mind.

More articles by Bernard…


CTO Interview: Peter Theunis of Methodics
by Daniel Nenni on 02-06-2017 at 7:00 am

Fascinated by computers at a very young age, Peter got his degree in computer science and was brought to the Bay Area via AIESEC Berkeley's student exchange program to write his thesis. He now has more than 15 years of professional experience in software engineering, large-scale systems architecture and data center engineering in Silicon Valley startups as well as at Yahoo!, where he spent the last nine years as a systems architect and principal engineer.

So how does a systems architect at Yahoo! find his way to an EDA IP lifecycle management company like Methodics?
After spending many years looking at system-level issues across multiple industries, I realized that the semiconductor industry is on the verge of experiencing many of the challenges that I have seen before. Today's SoCs are in fact very complex systems, with many of the challenges that other systems-oriented industries have faced. Semiconductor companies need solutions similar to those other industries use, so that the complexity of a modern SoC can scale with the tools and methodologies that semiconductor design teams have at hand while making the overall process much more efficient. Methodics had an excellent foundation to build solutions for the future of semiconductor design that will scale with the complexity challenges design teams will face.

What type of solutions do you need to provide to design teams to allow them to realize these efficiencies?

As with any other complex system, it has always been a divide-and-conquer approach. As SoCs became commonplace, many companies started to focus on "reuse." The main goal was to allow portions of previous designs to be reused to reduce the amount of design effort needed in new designs. As reuse within design teams became commonplace, the need to reuse designs outside of the immediate team, as well as to acquire pieces of the design from partners or third parties, led to the notion of IP and IP management. As SoCs grew larger and more complex, the amount of data and metadata associated with IPs grew astronomically; better technology was needed to manage the lifecycle of IP, and more complex digital asset management systems were needed.

I am familiar with IP within the SoC design process, but digital asset management is a new term; can you elaborate?

Digital assets are the natural progression of IP lifecycle management. Original design reuse strategies started just with design data. As methodologies formalized around design reuse, you began to see IP methodologies evolve, with the formalization of techniques and methods for making designs reusable. Soon, verification IP was added to these flows, and now you also had to map verification and design IP together, leading to a growth in the information that needed to be managed as you added the metadata associated with these types of IP. Eventually design scripts, timing constraint files, DFT information and other design artifacts were added to the mix, leading to another explosion in data that needs to be managed. With this complexity, the relationships between all these different pieces of data also need to be managed, and you quickly begin to lose the traditional notion of IP. Today, with requirements systems producing myriads of requirements, issue and defect systems tracking all aspects of design, and the need to trace information throughout the entire process, there is much more than just traditional IP that needs to be managed. There is a whole range of digital assets that need to be managed.

How is Methodics helping with digital asset management?
Methodics' most recent solution provides a true digital asset management platform. It all starts with effective design data management. It has always been Methodics' strategy to work with industry standards when they are available. For design data management, we build on Perforce, which has been providing leading enterprise data management solutions to multiple industries for years. The platform then allows the integration of other engineering systems, providing communication across all solutions and design teams where none has been available. Given that many engineering systems are in use for issue and defect tracking, requirements tracking and other management tasks, the Methodics platform allows the integration of systems like Jira and Bugzilla for issues and defects and Jama for requirements management, letting it work seamlessly in existing engineering environments. The goal of the platform is to link all design data, IP data and metadata, design artifacts, requirements, and issue and defect data together into a cohesive information system that allows designers not only to search for and find IP, but also to find all possible data associated with that IP and how it is connected through the various engineering systems. Engineering teams can now be more productive by quickly searching for and finding IP, being quickly informed of issues with the IP being used and when to expect resolution of those issues, and being able to track quality and grade the IP being used. In addition, management can quickly compile reports of where IP is being used, outstanding issues with that IP, traceability of requirements and other pertinent design data. The platform provides efficiency in design by streamlining the management and communication of data from multiple disparate systems, and it can scale with increasing complexity by tracking the convoluted web of interconnected information throughout designs and organizations.

What is unique to the Methodics technology that makes it able to provide solutions?
We recently re-architected our underlying object store to support a graph database. We made this decision because of the highly complex and hierarchical nature of managing IP and digital assets. Graph databases store the relationships between pieces of data directly, allowing them to be retrieved quickly, often with a single operation. Compare this to traditional relational databases, where links between data are stored in the data itself; to retrieve complex hierarchical and interrelated data, you would need to make multiple expensive calls to extract the required information. Our graph database has made possible greater than 10x performance improvements just in handling hierarchical IP in SoC designs. It has also allowed us to create the digital asset management platform linking in other engineering systems, which would not have been possible with a traditional relational database.
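As a rough illustration of the difference (my own sketch, not Methodics' actual schema): with relationships stored directly as a graph, walking an SoC's full IP hierarchy is a single traversal, whereas a relational layout typically needs a join, or a query round trip, per level of hierarchy.

```python
# IP usage stored as adjacency: each IP points directly at the IPs it instantiates.
ip_graph = {
    "soc_top":     ["cpu_cluster", "ddr_ctrl", "pcie_phy"],
    "cpu_cluster": ["cpu_core", "l2_cache"],
    "cpu_core":    ["fpu"],
    "ddr_ctrl": [], "pcie_phy": [], "l2_cache": [], "fpu": [],
}

def all_ips_used_by(top):
    """One graph traversal returns the entire hierarchy under an IP."""
    seen, stack = set(), [top]
    while stack:
        for child in ip_graph[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(all_ips_used_by("soc_top"))
# A relational schema would typically answer the same question with repeated
# self-joins or a recursive query: one round trip per level of hierarchy.
```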

It seems that Methodics is investing in improving the efficiency of finding IP and critical information for SoC design, but designers still have to manage huge amounts of data in their workspaces. How is Methodics helping there?
You are correct. We are seeing user workspaces routinely exceed tens of GB, sometimes hundreds of GB. Regression runs, characterization runs, and design and debug workspaces all put great stress on Network Attached Storage (NAS), create NFS bottlenecks and cause significant project delays. This situation is only getting worse. Last year we introduced WarpStor, a content-aware NAS optimizer and accelerator. It excels at dramatically reducing user workspace storage requirements and creation time. With WarpStor, our customers can see up to a 90%+ reduction in storage requirements and a corresponding reduction in network I/O for user workspaces, regression runs, characterization runs, and the like. Creation time for multiple large workspaces is also reduced from hours to seconds. We have been using this technology internally for some time, and now our customers are realizing these same results. We now have many reference customers for this technology who are willing to speak with companies that are interested in it.
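The savings come largely from the fact that most files are identical across workspaces. A minimal sketch of the general content-aware idea (my own illustration, not WarpStor's actual implementation) is to key storage by content hash so duplicate files are stored once and merely referenced:

```python
import hashlib

store = {}        # content hash -> file bytes, stored exactly once
workspaces = {}   # (workspace, path) -> content hash, a cheap reference

def add_file(workspace, path, data):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)             # identical content is never stored twice
    workspaces[(workspace, path)] = digest

# Two users check out the same regression setup: only one copy hits the NAS.
add_file("alice_ws", "run/config.tcl", b"set corner ss_0p72v_125c\n")
add_file("bob_ws",   "run/config.tcl", b"set corner ss_0p72v_125c\n")
print(len(store), "unique blob(s) backing", len(workspaces), "workspace files")
```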

Where do you see Methodics research and development efforts going from here?
Since the complexity of SoCs is only going to increase, we will continue to leverage our knowledge of solving systems challenges and adapt it for the semiconductor industry. There is a wealth of knowledge and methodology that can be adapted to the semiconductor industry, and we will continue to do that. Likewise, the work we are doing around IP and digital asset management as well as workspace management can help other industries, so we will continue to synthesize across industries and bring solutions to them as well. Stay tuned….

www.methodics.com