First Thoughts from #54DAC!
by Daniel Nenni on 06-24-2017 at 7:00 am

This was my 34th DAC, yes 34. It is a shame blogging did not exist back then because I would have liked to read the thoughts of my eager young mind, or maybe not. The first thing that struck me this year was the great content. Before DAC I review the sessions I want to see, and this year there were many more than I had time for. Thankfully most of them were recorded, so I can go back and catch the quite a few that I missed.

SemiWiki had five bloggers covering DAC this year so you will be seeing the resulting blogs for weeks to come. Next year we will have more bloggers because #55DAC is in San Francisco and I’m expecting even more great content.

The other thing that stuck out was the age of the DAC attendees. There were many more under 35 than before, and this tracks with the SemiWiki analytics: so far in 2017 the majority of our traffic is under 35 years old. That is a real positive sign, and the female readership is now in double digits and increasing steadily. I credit the DAC committee with attracting a younger crowd through university programs and the like. The poster sessions this year were packed; the free food and drinks during the receptions probably helped attendance, but that still counts.

As for the total crowd, it seemed much lighter than I remember from last year and a lot lighter than San Francisco the year before, but we will have to wait for the official word from DAC. There were quite a few first-time exhibitors this year, which is a great sign, and you should expect even more next year in San Francisco. I took a look at the Solido meeting room schedule and was surprised to see both of their meeting rooms were booked up. Impressive! Other vendors I asked said the same. Pre-setting meetings is definitely the way to go.

The award for the best booth goes to OneSpin for sure. It was not the busiest booth but it definitely stood out. I would have to say that Cadence had the busiest booth as they usually do.


One of the trending topics at DAC and on SemiWiki is artificial intelligence and I would expect that to continue for years to come. On SemiWiki we track different application areas such as AI, Automotive, Mobile, IoT, and Security. We can then cross section that with geography, vendor, events, etc… The thing about AI is that it seems to touch all of the application areas and that is great news for semiconductors because AI will consume massive amounts of silicon for processing power and storage. The foundries will benefit because leading edge silicon will be in great demand and of course the memory makers will take their fair share of the profits.

Speaking of foundries, Synopsys was kind enough to host foundry events for TSMC, Samsung, GF, and Intel that were packed. TSMC, Samsung, and Intel were breakfasts and GF was dinner. Press was not allowed in the Intel breakfast but I attended the other three. As soon as the videos are posted I will publish my blogs because they are definitely worth viewing. We will do the same with the other bloggers and the recorded sessions they attended. I was told links would be posted in two weeks so stay tuned. Synopsys had a Press/Analyst table right up front and Aart de Geus sat with us both mornings. As I have said before, Aart is one of the most interesting people in EDA so sitting with Aart is a session in itself.

One of the more interesting discussions after the foundry sessions that I heard was: Why are the foundries releasing so many process versions? Some of the answers made me cringe. The easy answer is that customers are asking for them. In the case of TSMC that is a plausible answer because TSMC is very customer driven. You can call it collaboration but realistically TSMC builds capacity based on customer demand which is why TSMC has very high fab utilization rates. I also believe TSMC is using the quick node strategy to protect their customer base. For example, SMIC and UMC are shipping TSMC compatible 28nm. Now they will have to follow TSMC to 22nm. UMC and SMIC are also working on FinFET processes that will likely be “TSMC like”. Well, they had better be TSMC 12nm “like”.

DAC is a lot of work for everyone associated with it including exhibitors, presenters, panelists, committees, bloggers, etc… Please make sure your gratitude is well placed because without DAC I seriously doubt we would have the semiconductor audience we have today, absolutely.


Ransomware of $1 Million Crushes Company
by Matthew Rosenquist on 06-23-2017 at 12:00 pm

A South Korean web hosting company struggles for survival after agreeing to pay a ransomware extortion of $1 million to hackers.

New Record for Ransomware
Nayana, the South Korean web hosting firm, suffered a ransomware attack that left 153 Linux servers infected. The data encrypted by the malware affected approximately 3,500 small-business clients. The ransomware targeted files, databases, and video. The compromise shuttered the hosting firm's services.

The attackers demanded a colossal recovery fee of over $4 million. Negotiations brought that figure down to $1 million, to be paid in several installments. This is a new record payout for ransomware victims. Sadly, it will only motivate cybercriminals to press forward with more brazen attacks.

Failure to Manage Cyber Risks
Ransomware is a well-known problem and one that continues to grow in popularity with cyber-criminals. The malware that infected Nayana was a variant of the Erebus ransomware, specifically designed for Linux. Nayana was behind on proper updates and patching, running vulnerable systems on an outdated Linux kernel compiled in 2008.

Once Erebus gained a foothold, its encryption began undermining the integrity of files, making them unusable by their owners. Erebus uses the RSA algorithm to encrypt the unique AES keys that lock each file, so decryption without the RSA private keys held by the attackers is effectively impossible with current methods. This variant of ransomware can target over 400 different file types, including Microsoft Office documents, databases, and multimedia files, but it is most adept at encrypting web server data.
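
To make the hybrid scheme above concrete, here is a minimal Python sketch of generic envelope encryption using the widely used cryptography library: a fresh AES key is generated per file, the file is encrypted with it, and the AES key is then wrapped with an RSA public key. This is my own illustration of the general technique the article describes, not Erebus source code, and the function name and file layout are invented; it simply shows why recovery is infeasible without the RSA private key.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file_hybrid(path: str, rsa_public_key) -> None:
    # A unique AES key for this one file; AES-GCM also authenticates the data.
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    with open(path, "rb") as f:
        ciphertext = AESGCM(aes_key).encrypt(nonce, f.read(), None)
    # Wrap the per-file AES key with the RSA public key (OAEP padding).
    # Only the holder of the matching RSA private key can unwrap it.
    wrapped_key = rsa_public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    with open(path + ".enc", "wb") as f:
        f.write(wrapped_key + nonce + ciphertext)

# Example usage (hypothetical): generate a key pair, keep the private key offline,
# and hand only the public key to the encryption routine.
# private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# encrypt_file_hybrid("example.txt", private_key.public_key())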

Many organizations believe that Linux is more secure than Windows, which creates a false sense of security. Potential victims can be lulled into complacency, skimping on patches, updates, backups, monitoring, response planning, and security staffing. Only when their delicate house of cards comes crashing down does better security start to seem like a prudent idea.

In reality, neither Linux nor Windows is impervious to ransomware. Diligence and attention are required to maintain a proper security posture.

A Company Crushed
This may be the end of Nayana, a web hosting company now at the mercy of ransomware hackers. Since June 10th, the company has been struggling to find ways to resolve the issue and ultimately decided to negotiate with the attackers. In a posting to customers (http://www.nayana.com/bbs/set_view.php?b_name=notice&w_no=957) Nayana reported the incident and its attempts to restore data. In a second post on June 14th (http://www.nayana.com/bbs/set_view.php?b_name=notice&w_no=961) the CEO discussed the frustration and challenges of the situation. He even posted his communications to the hackers, stating that he expects his business will not recover.

The first installments of the ransom have reportedly been paid. File decryption and validation have begun, but it remains to be seen whether customers will stay with Nayana or leave for other service providers.

Who is Next?
Every company reliant on digital services must take cyber and ransomware risks seriously. This level of digital extortion is a new record for ransomware, and it has destroyed a new victim in the process. It raises the bar, but it will soon become the norm.

The trend is unmistakable. Cyber criminals are becoming more technologically savvy and bolder in the targets and demands they make. Driven by greed, they are recognizing the huge potential heists available in the cyber landscape. Robbing banks, casinos, and armored cars at gunpoint seems antiquated and too risky compared to the safety and anonymity of the Internet. The new digital frontier holds much greater promise with far fewer challenges. Cyber-attacks will only get worse.

Those who protect themselves with vigor and professional security will rise above the pool of easy victims that criminals will target first. Every organization has a choice. Managing cyber risks is a real challenge, but one that should not be ignored. Investments in security must be commensurate with the value of what is being protected.

This incident must be a wake-up call and a lesson to other companies. Organizations that take cybersecurity for granted may be the next fatality.

Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.


Safety EDA
by Bernard Murphy on 06-23-2017 at 7:00 am

It takes courage and perhaps even a little insanity to start a new EDA venture these days – unless you have a decently differentiated value proposition in a hot market. One company that caught my eye, Austemper, seems to measure up to these standards (though I can't speak to the insanity part). They offer EDA tooling specifically for safety, spanning safety analysis (FIT and fault metrics), safety synthesis and safety verification.


Safety verification through fault injection is offered by bigger players but even here Austemper may have an angle to differentiate their offering. What intrigues me is that safety could quite likely evolve into a specialized in-house design service, like test or power, where experts may be open to end-to-end flows rather than a collection of in-house and vendor point tools. Which would play well to this kind of solution.

The company offers four tools in three functional areas, starting with SafetyScope, which computes the failures in time (FIT) and fault metrics for a design. The FIT calculation is based on a rather involved equation from the IEC 62380 model, where inputs can come from IP suppliers/other experts and can be augmented with user input. A safety plan can also be fed into this stage. Apparently, analysis can be “out of context” in which case it is essentially static or it can be “in context” in which case it can take usage data into account. The output of this stage is metrics across the design for FIT rate and diagnostic coverage required to get to target ASIL levels. This stage also generates fault-injection points to be used in the verification phase.

Safety hardening is handled by Annealer for big changes like duplication or triplication of blocks, and by Radioscope, which does similar things at a finer grain (e.g. register banks). Here they replicate and inject logic to implement hardening. In Annealer, selected logic can be automatically duplicated with comparison checks inserted to detect mismatches, or selected logic can be triplicated along with majority voting. In Radioscope, similar automated replication occurs, with parity checks for duplication and ECC for triplication. Radioscope can also add protocol checks to critical FSMs for legal states and legal transitions.
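
As a rough, behavioral illustration of these two schemes (this is my own sketch, not Austemper-generated logic; the function names are invented), duplication with a comparator detects a single fault without correcting it, while triplication with a majority voter masks it:

def duplicate_with_compare(copy_a, copy_b, inputs):
    # Two copies of the same logic plus a comparator: a fault in either copy is detected, not corrected.
    a, b = copy_a(inputs), copy_b(inputs)
    return a, (a != b)                       # (output, error flag)

def triplicate_with_vote(copy_a, copy_b, copy_c, inputs):
    # Three copies plus a voter: a single faulty copy is out-voted, so the fault is masked.
    results = [copy_a(inputs), copy_b(inputs), copy_c(inputs)]
    voted = max(set(results), key=results.count)
    return voted, len(set(results)) > 1      # (output, disagreement flag)

# Toy usage: the "faulty" copy models an injected stuck-at-1 fault on bit 0 of one replica.
good = lambda x: x & 0xF
faulty = lambda x: (x & 0xF) | 0x1
print(triplicate_with_vote(good, good, faulty, 0b1010))   # -> (10, True): masked but flagged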

The final tool in the flow is Kaleidoscope which does fault simulation based on injected faults, as is generally required as a part of verification for safety-critical designs. Here they use their own fault simulator to simulate behavior for faults injected into the gate-level design but with wrinkles. First, they can take a VCD developed by any simulator as a starting point. It seems they also intelligently limit each fault simulation, in time and in design scope, to limit run-time. They can also run many injected faults in parallel to classify a large number of faults as masked or failed-state in a single run.
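
The masked-versus-failed classification can be pictured as comparing a fault-injected trace with the golden trace over the limited analysis window, something like the toy function below. This is only an assumption-laden sketch of the idea, not Kaleidoscope's algorithm, and the data structures are invented.

def classify_fault(golden_trace, faulty_trace, observation_points):
    # golden_trace / faulty_trace: {signal_name: [value per cycle]} for the analysis window.
    for sig in observation_points:
        if golden_trace[sig] != faulty_trace[sig]:
            return "failed"    # the fault propagated to an observable point within the window
    return "masked"            # no observable difference, so the fault is classified as masked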

On customers, there's the usual problem of not being able to reveal names, but Sanjay Pillay, the CEO, did tell me that they have been working for a year with a supplier to a tier-1 customer, which has now taped out. They are now working with their second customer. This sounds like an interesting company with some real-world (if not shareable) validation.

Austemper was founded in 2015 and is based in Austin. Sanjay previously led SoC development organizations at a variety of companies, including development for tier-1 companies. He also served as functional safety consultant in some of these roles. You can learn more about the company HERE.


Amazon eating Whole Foods is nothing as entire industries are about to become toast
by Vivek Wadhwa on 06-22-2017 at 12:00 pm

I doubt that Google and Microsoft ever worried about the prospect that a book retailer, Amazon, would come to lead one of their highest-growth markets: cloud services. And I doubt that Apple ever feared that Amazon’s Alexa would eat Apple’s Siri for lunch.

For that matter, the taxi industry couldn't have imagined that a Silicon Valley startup would be its greatest threat, and AT&T and Verizon surely didn't imagine that a social media company, Facebook, could become a dominant player in mobile telecommunications.

But this is the new nature of disruption: disruptive competition comes out of nowhere. The incumbents aren't ready for this, and as a result the vast majority of today's leading companies will likely become what I call toast—in a decade or less.

Note the march of Amazon. First it was bookstores, publishing and distribution; then cleaning supplies, electronics and assorted home goods. Now Amazon is set to dominate all forms of retail as well as cloud services, electronic gadgetry and small-business lending. And its proposed acquisition of Whole Foods sees Amazon literally breaking the barriers between the digital and physical realms.

This is the type of disruption we will see in almost every industry over the next decade, as technologies advance and converge and turn the incumbents into toast. We have experienced the advances in our computing devices, with smartphones having greater computing power than yesterday’s supercomputers. Now, every technology with a computing base is advancing on an exponential curve — including sensors, artificial intelligence, robotics, synthetic biology and 3-D printing. And when technologies converge, they allow industries to encroach on one another.

Uber became a threat to the transportation industry by taking advantage of the advances in smartphones, GPS sensors, and networks. Airbnb did the same to hotels by using these advancing technologies to connect people with lodging. Netflix’s ability to use internet connectivity put Blockbuster out of business. Facebook’s WhatsApp and Microsoft’s Skype helped decimate the costs of texting and roaming, causing an estimated $386 billion loss to telecommunications companies from 2012 to 2018.

Similarly, having proven the viability of electric vehicles, Tesla is building batteries and solar technologies that could shake up the global energy industry.

Now tech companies are building sensor devices that monitor health. With artificial intelligence, these will be able to provide better analysis of medical data than doctors can. Apple’s ResearchKit is gathering so much clinical-trial data that it could eventually upend the pharmaceutical industry by correlating the effectiveness and side effects of the medications we take.

As well, Google, Facebook, SpaceX, and Oneweb are in a race to provide Wi-Fi internet access everywhere through drones, microsatellites and balloons. At first, they will use the telecom companies to provide their services; then they will turn them into toast. The motivation of the technology industry is, after all, to have everyone online all the time. Their business models are to monetize data rather than to charge cell, data, or access fees. They will also end up disrupting electronic entertainment — and every other industry that deals with information.

The problem for market leaders is that they aren't ready for this disruption and are often in denial. The disruptions don't happen within an industry, as business executives have been taught by gurus such as Clayton Christensen, author of the management bible "The Innovator's Dilemma"; rather, they come from where you would least expect them. Christensen postulated that companies tend to ignore the markets most susceptible to disruptive innovations because these markets usually have very tight profit margins or are too small, leading competitors to start off by providing lower-end products and then scale them up, or to go for niches in a market that the incumbent is ignoring. But the competition no longer comes from the lower end of a market; it comes from other, completely different, industries.

Because they have succeeded in the past, companies believe that they can succeed in the future, that old business models can support new products. Large companies are usually organized into divisions and functional silos, each with its own product development, sales, marketing, customer support and finance functions. Each division acts from self-interest and focuses on its own success; within a fortress that protects its ideas, it has its own leadership and culture. And employees focus on the problems of their own divisions or departments — not on those of the company. Too often, the divisions of a company consider their competitors to be the company’s other divisions; they can’t envisage new industries or see the threat from other industries.

This is why the majority of today’s leading companies are likely to go the way of Blockbuster, Motorola, Sears and Kodak, which were at the top of their game until their markets were disrupted, sending them toward oblivion.

Companies now have to be on a war footing. They need to learn about technology advances and see themselves as a technology startup in Silicon Valley would: as a juicy target for disruption. They have to realize that the threat may arise in any industry, with any new technology. Companies need all hands on board — with all divisions working together employing bold new thinking to find ways to reinvent themselves and defend themselves from the onslaught of new competition.

The choice that leaders face is to disrupt themselves — or to be disrupted.

For more, read my book, Driver in the Driverless Car, and visit my website: www.wadhwa.com


Electronics upturn boosting semiconductors
by Bill Jewell on 06-21-2017 at 12:00 pm

Production of electronics has been accelerating in the last several months, contributing to strong growth in the semiconductor market. China, the largest producer of electronics, has seen three-month-average change versus a year ago (3/12 change) accelerate from below 10% for most of 2016 to 14.5% in April 2017. China’s April growth rate is the highest in over five years. United States electronics production 3/12 change has been over 5% since December 2016. U.S. electronics 3/12 change had not been over 5% in over ten years, since November 2006.
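
For readers unfamiliar with the 3/12 metric used throughout this piece, it is the three-month average compared with the same average a year earlier, and it can be computed from a monthly production index along these lines. The pandas sketch below is my own illustration, not WSTS or government methodology, and the series name is hypothetical.

import pandas as pd

def change_3_12(monthly_index: pd.Series) -> pd.Series:
    # Three-month moving average of a monthly production index...
    three_month_avg = monthly_index.rolling(window=3).mean()
    # ...expressed as the percentage change versus the same average twelve months earlier.
    return 100.0 * (three_month_avg / three_month_avg.shift(12) - 1.0)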


The European Union (EU) no longer releases monthly data on electronics production. EU total industrial production 3/12 change was 2.4% in April 2017. EU industrial production growth has been in the range of 1% to 3% since November 2013, following 20 months of 3/12 decline in 2012 and 2013. Japan electronics production has been volatile, with 3/12 change over the last seven years ranging from a 27% decline to 8% growth. March 2017 3/12 change was 1.6%, the first positive 3/12 change in the last 18 months. According to data from World Semiconductor Trade Statistics (WSTS), semiconductor market 3/12 change has accelerated dramatically over the last year from a 7% decline in May 2016 to 21% growth in April 2017. Accelerating electronics production has driven much of the semiconductor growth. Other driving factors are rising memory prices and inventory restocking.

Although China is the dominant Asian electronics producer, other countries play a significant role. Electronics production 3/12 change over the last year shows a mixed picture. Vietnam and India are significant emerging electronics producers. However, over the last year the 3/12 change for each country has decelerated from over 20% to about 2% in April 2017. Malaysia has shown steady growth in the 6% to 8% range. Thailand has bounced back from declines of 5% to 8% a year ago to double-digit growth in each of the last two months.


South Korea and Taiwan have historically been significant electronics producers. South Korea has shown positive 3/12 change for the last 20 months after 12 months of decline from late 2014 to late 2015. Taiwan has been weak over the last couple of years, with 3/12 declines since April 2015. Taiwan has been among the countries hardest hit by the shift of electronics manufacturing to China. In many cases, it has been Taiwan-based companies behind the shifts.

The table below shows average hourly manufacturing compensation costs (wages + benefits) for key countries. Location of electronics production depends on several factors, but compensation costs are a major consideration for labor-intensive, low-skilled manufacturing. The U.S., Euro Area, Japan and South Korea are high-cost areas with compensation costs over $20 per hour. Taiwan is trending toward high cost at about $10 per hour. China, at $3.52 per hour, is low cost compared to the above countries. However, Vietnam and India, at about $1 per hour, have costs less than one third of China's. Thus India and Vietnam electronics manufacturing should grow faster than China's over the next several years.


Trends in electronics production bear watching. A slowdown in the growth rate of electronics will lead to a downturn in the semiconductor market. A drop-off in electronics may also lead to falling memory prices and semiconductor inventory reductions – which could drive the semiconductor market negative. As we stated in our semiconductor forecast last month, this downturn could occur as early as 2019.


Can AI be Conscious?
by Bernard Murphy on 06-21-2017 at 7:00 am

A little self-indulgence for the season, to lighten the relentless diet of DAC updates. I found a recent Wired article based on a TED talk on consciousness. The speaker drew the conclusion that consciousness is not something that could ever be captured in a machine and is a unique capability of living creatures (or at least humans). After reading an article on the TED talk and watching a related talk, I'm not so sure, but I am fairly convinced that whatever we might build in this direction may be quite different from our consciousness, will probably take a long time and will be plagued with problems.


The TED event (speaker Anil Seth, professor of neuroscience at the University of Sussex in the UK) is not posted yet, but there is a more detailed talk by the same speaker on the same topic, given recently at the Royal Institution, which I used as a reference.

First, my own observations (not drawn from the talk). AI today is task-based, in each case skilled at doing one thing. That thing might be impressive, like playing Go or Jeopardy, providing tax advice or detecting cancerous tissue in mammograms, but in each case what is offered is still skill in one task. A car-assembly robot can’t compose music and even Watson can’t assemble a car. Then why not put together lots of AI modules (and machinery) to perform lots of tasks or even meta-tasks? Doesn’t that eventually surpass human abilities?

I suspect that the whole of human ability might be greater than the sum of the parts. Most of us are probably familiar with task-based workers. If I am such a worker, you tell me what to do, I do it then wait to be told what task I should do next, as long as it is a task I can already do. Some other workers provide an obvious contrast. They figure out on their own what the next task should be, they develop an understanding of higher-level goals and they look for ways to improve/optimize to further their careers. This requires more than an accumulation of task or even meta-task skills. It requires adaptation towards goals of reward, a sense of accomplishment or a desire for self-betterment, which I’d assert requires (at a minimum) consciousness.

Which brings me to Anil Seth's talk. He co-directs a center at the University of Sussex for the scientific study of consciousness; the Royal Institution talk discusses some of their findings. To focus the research, he bounds the scope of study to accounting for various properties of consciousness, ducking obvious challenges in answering questions around the larger topic.

He narrows the scope further to what he thinks of as the first step in self-awareness, which he calls bodily consciousness: awareness of what we see, feel and so on. His research shows a Bayesian prediction/reasoning aspect to this. Think of our visual awareness. We get input from our eyes, the visual cortex processes this, then our brain constructs a prediction of what we are seeing based on this and other input, and on past experiences (hence the Bayes component), which is then compared again with sensory inputs and adapted. In his words, we create a fantasy which we adjust to best match what we sense against prior experience; this we call reality. He calls this a controlled hallucination (hence the Matrix image in this piece).
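
To make the Bayesian flavor concrete, here is a toy update (my own illustration, not taken from the talk): a prior expectation about what is out there is combined with the likelihood of the current sensory input, and when the input is ambiguous the prior can dominate what we end up "seeing".

def bayes_update(prior, likelihood):
    # prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(sensory input | h)}.
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# A strong prior for "face" nudges an ambiguous input toward "face" even though
# the raw sensory evidence slightly favors "random blob".
prior = {"face": 0.7, "random blob": 0.3}
likelihood = {"face": 0.4, "random blob": 0.5}
print(bayes_update(prior, likelihood))   # posterior still favors "face" (about 0.65 vs 0.35)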

This reality is not only based on what we sense outside ourselves; it is also based on what we sense inside our bodies. I see a bear and I sense the effects of adrenalin on my system: my heart beats faster, my hair (such as it is) stands on end and I feel the need to run (perhaps not wise). All of this goes into the Bayesian prediction, which we continue to refine through internal and external sensing. I should add, by the way, that this is not mere philosophizing; all of this is derived from detailed experiment-based studies in the Sussex consciousness group.

So just this basic level of consciousness, before we get to volition and a sense of identity through our experiences and social interaction, is a very complex construct. It certainly depends on sensory input from external sources, but it also depends on our biology, which has evolved for fight or flight, attraction and other factors. So one takeaway is that reconstructing the same kind of consciousness without the same underlying biology would be difficult.

Anil Seth asserts that it is therefore impossible to create consciousness without biology. That seems to me a bridge too far. What we are doing now in deep learning, in object recognition for example, transcends traditional machine behavior in not being based on traditional algorithms. And if we can reduce aspects of consciousness to mechanized explanations like Bayesian prediction, there is no obvious reason why we should not be able to do the same in a machine. We would probably have the same challenges in explaining the behavior of the machine, but not in creating the machine. This would be a non-biological consciousness (the machine could however introspect on its own internals), but not necessarily a lesser consciousness.


There's an important downside. Just as the brain can have pathological behaviors in this controlled hallucination, and those can have serious consequences not just for the owner of the brain but also for others, the same would be true for machines of this type. But understanding and control are potentially more difficult in the machine case because the "reality" perceived by the machine may not align with our reality even in non-pathological behavior. We may struggle to find reference points for normal behavior and struggle even more to understand and correct pathologies. Hence my view that trustable machine consciousness may take a while.

On that note, sleep well. What could possibly go wrong?


Accurate Power Sooner
by Bernard Murphy on 06-20-2017 at 7:00 am

Synopsys PrimeTime PX, popularly known as PT-PX, is widely recognized as the gold standard for power signoff. Calculation is based on a final gate-level netlist reflecting final gate selections and either approximate interconnect parasitics or final parasitics based on the post-layout netlist. The only way to get more accurate power values is to measure the real thing on silicon after fabrication.


By nature, this kind of analysis starts very late in the design flow because you need a near-implementation or post-implementation netlist, and it takes quite a long time to perform because you must run gate-level simulations to generate activity data, which can take days to weeks. When signoff is a final confirmation that power is indeed in spec this is OK, but cycle times like this are definitely not OK if you find you missed the power budget. Short of planning for another spin, options until now were limited. You could go back to RTL to fix the microarchitecture using SpyGlass Power, a great tool for approximate estimation and optimization earlier in the design flow, but that implies an implementation restart, which would delay tapeout significantly.

What you really need here is an intermediate solution between early RTL estimation and final PT-PX signoff accuracy, something that is still very accurate and based on gate-level netlists, but which you can get to much more quickly. This would enable earlier checks at near-signoff accuracy, allowing time for less disruptive corrective actions where needed. This is what Synopsys PowerReplay (a separate product) can offer, together with PT-PX. Synopsys launched this solution in May of this year; a webinar presented by Vaishnav Gorur (PMM) and Chun Chan (R&D director) provides details.


PowerReplay works together with PT-PX, which still does the power estimation based on the same pre- or post-layout netlist, together with SDF if available. What PowerReplay provides in this flow is the ability to short-circuit all the gate-level simulation setup and a good deal of the simulation run-time, while still generating the activity data you need. It does this by starting from an available RTL-based FSDB, from which it auto-maps the stimulus onto the gate-level netlist. The mapping is improved if the SVF file from synthesis is supplied as an additional input. This results in more accurate power numbers downstream.

You can also do activity analysis in PowerReplay to narrow down time windows you want to use in power estimation. While highest activity doesn’t necessarily imply highest power, high activity along with some knowledge of the design should help you localize best windows for worst-case power. In addition you can localize analysis to look only at certain blocks. And, as you might expect, you can run these analyses in parallel. PowerReplay runs simulation on the gate-level netlist using the stimulus from the RTL FSDB, restricting simulation to your selected time windows and design scope. Put this all together and you’ve gone from a long, grinding gate-level simulation and power estimation starting from time 0 to a much faster turn-time analysis requiring minimal setup and delivering almost the same accuracy.
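
To give a feel for the window-selection idea, the toy sketch below ranks fixed-size time windows by toggle count so the busiest ones can be handed to power estimation. This is purely my own illustration of the concept, not PowerReplay's actual algorithm, and the inputs are hypothetical.

def rank_activity_windows(toggle_times_ns, window_ns, top_n=5):
    # toggle_times_ns: timestamps (in ns) at which any monitored signal toggled,
    # e.g. extracted from an RTL waveform dump. Returns the busiest (start, end, count) windows.
    counts = {}
    for t in toggle_times_ns:
        w = int(t // window_ns)
        counts[w] = counts.get(w, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(w * window_ns, (w + 1) * window_ns, n) for w, n in ranked[:top_n]]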

Chun talked about a couple of customer case studies. In one case, the customer compared the PowerReplay flow with their existing signoff flow. They found that within the windows they selected for analysis, the PowerReplay flow results were within 2% of those for the reference flow. Also, where the reference flow took 7 days to complete, the PowerReplay-based analysis completed in 8 hours. In a second customer study, there was again a big reduction in run time thanks to the parallel analysis flow, and accuracy was within 2.5% of the reference flow. Across multiple customers, Vaishnav said they have seen accuracy within 5% of PT-PX signoff numbers.

A couple of interesting questions came up in the Q&A. One was whether PowerReplay sims take gate delays into account. The answer is yes, as long as you supply SDF. Taking this into account is important for accurate peak power analysis which would otherwise be skewed. Another good question was how much earlier in the flow customers had been able to run these analyses. Vaishnav said that this flow can be run on blocks, so you don’t have to wait for the full chip, which means that you can start getting accurate block-estimates typically weeks to months ahead of full-chip analysis.

You can replay the webinar HERE.


Is AI the end of jobs?
by Vivek Wadhwa on 06-19-2017 at 12:00 pm

Artificial Intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it “touches every single one of our main projects, ranging from search to photos to ads … everything we do … it definitely surprised me, even though I was sitting right there.”

The long-promised AI, the stuff we’ve seen in science fiction, is coming and we need to be prepared. Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze the vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from “The Jetsons” and R2-D2 of “Star Wars.”

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion — and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But if we do, we will be the losers. As I discussed in my new book, “Driver in the Driverless Car,” technology is now advancing on an exponential curve and making science fiction a reality. We can’t stop it. All we can do is to understand it and use it to better ourselves — and humanity.

Rosie and R2-D2 may be on their way but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence — but would never be mistaken for a human. They can, however, do a better job on a very specific range of tasks than humans can. I couldn’t, for example, recall the winning and losing pitcher in every baseball game of the major leagues from the previous night.

Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine’s Day, she might make a snarky comment but couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn’t help. That is where the human element comes in and where the opportunities are for us to benefit from AI — and stay employed.

In his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM’s Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. After coming to grips with his defeat, 20 years later, he says fail-safes are required … but so is courage.

Kasparov wrote: “When I sat across from Deep Blue twenty years ago I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer — or even playing chess.”


In other words, we better get used to it and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage.

AI is the next step in improving our cognitive functions and decision-making.

Think about it: When was the last time you tried memorizing your calendar or Rolodex or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don’t make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess games on our smartphones are many times more powerful than the supercomputers that defeated him, yet this didn’t cause human chess players to become less capable — the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

As Kasparov explains: “It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. … What happens when the early influential coach is a computer? The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. … The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train.”

Perhaps this is the greatest benefit that AI will bring — humanity can be free of dogma and historical bias; it can do more intelligent decision-making. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

For more, read my book, Driver in the Driverless Car, follow me on Twitter: @wadhwa, and visit my website: www.wadhwa.com


Design Deconstruction
by Bernard Murphy on 06-19-2017 at 7:00 am

It is self-evident that large systems of any type would not be possible without hierarchical design. Decomposing a large system objective into subsystems, and subsystems of subsystems, has multiple benefits. Smaller subsystems can be more easily understood and better tested when built, robust 3rd-party alternatives may be available for some subsystems, large systems can be partitioned among multiple design teams and complete system implementation can (in principle) be reduced to assembly of finished or nearly finished subsystems.


But what makes for an optimal implementation doesn’t always align well with the partitioning that best served the purposes of logic design. Physical design teams have known this for a long time and have driven physical tool vendors to add many enhancements in support of:

· Adjusting logic partitioning to better balance sizes for physical units
· While also minimizing inter-block routing to reduce demand on top-level routing resources
· Reducing delays in long inter-block signal routes with block feedthrus
· Duplicating high-fanout ports or even logic to reduce congestion

These methods worked well and still do, to some extent, but they paper over a rather obvious problem. The burden of resolving mismatches between logic and physical structure falls entirely on the physical design team, yet the line between logical and physical design is more blurred than it used to be, increasing the likelihood of iteration between these phases and therefore of repeated effort and delay in re-discovering optimal implementation strategies on each pass. In a climate of aggressive shift-left to minimize time to market, and of cost sensitivity that disallows sub-optimal compromises, this approach to managing the logic/implementation divide is headed in the wrong direction.

For those who don’t understand why logical and physical design have become so entangled, here’s a brief recap of a few examples. I’ve mentioned before the effects of low-power structure. Similar power islands may appear in widely separated parts of the logic hierarchy, yet there are obvious area and PG routing benefits to combining such logic into a single power island. But this restructuring can’t simply be moved to physical design, because changes like this must also be reflected in the RTL netlist and power intent for functional/power verification. Or think about MBIST insertion. It would be impossibly expensive to require one MBIST controller per memory in a design containing thousands of memories, so controllers are shared between memories. But the best sharing strategy depends heavily on the floorplan, and changing the strategy obviously affects the RTL netlist and DFT verification. Or think of a safety-critical design in which a better implementation suggests duplicating some logic. If that logic has been fault-injection tested, it’s not clear to me that it can simply be duplicated in implementation without being re-verified in fault-testing.


The obvious solution is to hand over more of this “coarse-grained” restructuring to logic design, leaving fine-grained tuning to the implementation team. This view has already gained traction in several design houses. The challenge though is that manually restructuring an RTL netlist can be very expensive in engineering resource and in time. Unfortunately, hierarchy in this case is not our friend. Moving blocks around a hierarchy looks easy in principle but maintaining all the right connections (rubber-banding connections) while not accidentally making incorrect connections (through naming collisions for example) is a lot harder, especially in modern SoC designs where some blocks you want to move may have hundreds or even thousands of connections.

Which makes this task a natural for automation. The objective is complex but mechanical: restructuring (as one example) requires large numbers of ports and nets to be added, changed or deleted in a systematic way that avoids accidental wire-ORs. Intelligent decisions need to be made on whether fanins/fanouts should be consolidated inside a block or outside (there should be some user control over this) and there should be strategies for handling tie-offs and opens. And at the end of it all, the modified netlist should still be human-readable. You would also like to see some level of changes reflected in constraint files like UPF and SDC. Probably these would still need designer cleanup to accurately reflect modified intent, but they should be a good running start.
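
To make the bookkeeping concrete, here is a toy Python model of the rubber-banding problem (entirely hypothetical, and not how STAR represents designs): given a flat view of instances, the blocks they live in and the nets they touch, it computes which nets must be punched through as new ports when an instance is moved into a different block.

def ports_needed(netlist, instance, new_parent):
    # netlist: {instance_name: {"parent": block_name, "nets": set_of_net_names}}.
    # Returns the nets that must become ports on new_parent after the move.
    moved_nets = netlist[instance]["nets"]
    new_ports = set()
    for net in moved_nets:
        # Find everyone else on this net and where they live.
        others = [i for i, d in netlist.items() if i != instance and net in d["nets"]]
        if any(netlist[o]["parent"] != new_parent for o in others):
            new_ports.add(net)   # the net now crosses the block boundary, so it needs a port
    return new_ports

# Hypothetical example: move an MBIST controller from the top level into "mem_subsys".
netlist = {
    "mbist_ctrl": {"parent": "top",        "nets": {"clk", "bist_en", "mem0_bus"}},
    "mem0":       {"parent": "mem_subsys", "nets": {"clk", "mem0_bus"}},
    "cpu":        {"parent": "cpu_subsys", "nets": {"clk", "bist_en"}},
}
print(ports_needed(netlist, "mbist_ctrl", "mem_subsys"))   # -> {'clk', 'bist_en'}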


Sounds like magic? DeFacto offers these capabilities as part of their STAR platform. In fact, they have been doing this in production for a while and cite some fairly compelling benchmark stats to support their claims. In one example, for a subsystem containing about 4K block instances, manual restructuring by a customer took 12 man-months, followed by 3 man-months to verify/correct the changed design against the original. Using STAR, the same restructuring was completed in 1.5 hours (3.5 hours for bit-blasted nets) and verification was an error-free run through equivalence checking. This flow has also been used to restructure gate-level netlists up to 10M instances (65M gates).

There’s the usual problem getting customer testimonials but a couple of organizations stepped up. Socionext in Japan stated that they saved up to 3% of die area by manipulating one of their designs in gates using STAR. They added that if they had pushed harder, they felt they could have got up to 10% area saving, which is a pretty massive claim. Marvell didn’t share stats but they did say that they had built a cost-effective IP integration and design restructuring system for large SoC designs at RTL. I happen to know that Marvell have been working on solutions of this type for years, so it’s impressive that they finally settled on STAR.

I mentioned that restructuring is part of the STAR platform. More generally, the platform can be used to build sub-system and SoC top levels, to inject control fabrics (such as DFT or power management) on top of an existing netlist, or to seamlessly update memory instances for improved power or performance through auto-generated wrappers. The platform supports a wide variety of design inputs – RTL of all flavors, IP-XACT, Excel, JSON (believe it or not) and more. It's also scriptable through Tcl, Python and other languages. You can learn more from DeFacto's webinar on restructuring HERE.