UBER car accident: Verifying more of the same versus the long-tail cases
by Moshe Zalcberg on 05-21-2018 at 12:00 pm

The recent fatal accident involving an UBER autonomous car was reportedly not caused, as initially assumed, by a failure of the car's many sensors to recognize the cyclist. It was instead caused by a failure of the software to make the right decision about that “object”. The system apparently treated it as a false positive, as “something” not important enough to stop or slow down the vehicle for, as if it were a newspaper page blown by the wind.

The (rather disturbing) dashcam footage that was released indeed shows the car's system “not bothering” about the cyclist crossing the road.

After all, this was an odd scenario. The system was probably trained to recognize cyclists on their bikes, riding along the road rather than across it, since crossing should only happen at pedestrian crossings, and most probably during the day.

One must consider the fact that such Artificial Intelligence systems are trained on actual frames and footage recorded during previous rides, which log the most common cases. If an object is not pre-classified as something that deserves attention, the car might well just move on.

At the Cadence CDNLive EMEA conference that took place in Munich in early May, Prof. Philipp Slusallek of Saarland University and the Intel Visual Computing Institute highlighted the critical role of verification for such AI systems. The pre-recorded footage is good for testing the routine and trivial cases (e.g. a cyclist riding along the road), he said, but not for complete coverage of the long tail of “critical situation with low probability” cases that the system may not handle and may never be tested against.


(I apologize for the quality of the slide pictures. The presentation was given in a well-attended large hall.)

The solution he offered was a High Performance Computing (HPC) system that analyzes the existing stimuli data and generates additional frames for such unavailable cases, to achieve a “high variability of input data”, just as one would do with constraint-driven randomization of inputs in the verification of VLSI systems. Such a system should take the “real” reality, as recorded in actual footage, as a basis and augment it with a Digital Reality of additional scenario instances, as described in his slides:
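For readers coming from chip verification, the analogy can be made concrete with a toy sketch. Everything below is my own illustration, not Prof. Slusallek's actual system: the parameter names and ranges are invented, and a real scenario generator would work on images and sensor data rather than a handful of labels. The point is only that, as in constrained-random VLSI stimulus generation, each constraint bounds a legal range and the generator samples freely within it:

```python
# Illustrative sketch only: constrained-random generation of driving
# scenarios, by analogy with constrained-random stimulus in VLSI
# verification. All parameter names and ranges are hypothetical.
import random

CONSTRAINTS = {
    "time_of_day":     ["day", "dusk", "night"],
    "actor":           ["pedestrian", "cyclist", "pedestrian_walking_bike"],
    "motion":          ["along_road", "crossing_at_crosswalk", "crossing_mid_block"],
    "actor_speed_kmh": (0.0, 25.0),
    "ego_speed_kmh":   (20.0, 70.0),
}

def random_scenario(seed=None):
    """Draw one scenario from the constrained solution space."""
    rng = random.Random(seed)
    return {
        "time_of_day": rng.choice(CONSTRAINTS["time_of_day"]),
        "actor": rng.choice(CONSTRAINTS["actor"]),
        "motion": rng.choice(CONSTRAINTS["motion"]),
        "actor_speed_kmh": rng.uniform(*CONSTRAINTS["actor_speed_kmh"]),
        "ego_speed_kmh": rng.uniform(*CONSTRAINTS["ego_speed_kmh"]),
    }

# Generate many scenarios; deterministic seeds make runs reproducible.
scenarios = [random_scenario(seed=i) for i in range(10_000)]
```

With enough samples, rare corner combinations (a pedestrian walking a bike across an unlit road at night) show up alongside the common ones, which is precisely what recorded footage alone cannot guarantee.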


However, such a system presents multiple challenges and requirements in analyzing the existing scenarios and creating additional valid ones; he listed an extensive set of these on another slide.


In light of this extensive list of requirements, it is no wonder that the first part of the presentation focused on the HPC platform necessary to run such analysis and simulation. Since AI for Autonomous Driving (and possibly other use-cases) is supposed to be ubiquitous (and therefore cost-sensitive), I wonder whether this heavy-lifting computing system will be a viable solution for the “masses”.

Furthermore, a verification engineer looking at some of the slides above might be tempted to think constrained-random is the solution. A researcher might see Monte-Carlo simulation in it, and others might see their domain-specific solution. The real solution would most likely be all of those and none of them, as the problem at hand definitely requires a new paradigm.

Speaking from a verification engineer's standpoint: constrained-random, while good at generating extremely varied solutions, always requires a set of rules, i.e. the constraints. Its native field of application was generating unexpected combinations within well-defined protocols. With autonomous driving there are really no hard rules, as the Uber incident demonstrates quite well. Rather than starting from constraints, building a well-defined solution space and then trying to pick the most varied and interesting cases, this problem requires starting from real-life scenarios and then augmenting them with interesting variance that follows only soft rules.

Instead of fighting the last war, verification engineers should probably start looking for inspiration, new technologies and new methodologies for generating stimuli elsewhere, maybe in the machine-learning domain? On the other hand, machine-learning algorithms are often a “black box” (with these inputs, those are the outputs), giving neither the system nor the person designing it enough insight into what can be improved and how.
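To make the contrast concrete, here is an equally hypothetical sketch of the second approach: rather than sampling a constraint-defined space from scratch, start from a scenario distilled from real footage and perturb it with soft rules that bias, but never forbid, the long-tail cases. Again, the field names and probabilities are invented for illustration only:

```python
# Illustrative sketch: augment a recorded real-life scenario with
# "soft-rule" perturbations instead of generating from hard constraints.
import random

# One scenario distilled from actual footage (labels hypothetical).
recorded = {
    "actor": "cyclist",
    "motion": "along_road",
    "time_of_day": "day",
    "lighting_lux": 10_000.0,
}

def augment(scenario, rng):
    """Perturb a real scenario; soft rules bias outcomes but forbid nothing."""
    s = dict(scenario)
    if rng.random() < 0.2:   # soft rule: mid-block crossings are rare, not illegal
        s["motion"] = "crossing_mid_block"
    if rng.random() < 0.3:   # soft rule: most recorded footage is daytime
        s["time_of_day"] = "night"
        s["lighting_lux"] = rng.uniform(0.1, 50.0)
    return s

rng = random.Random(42)
augmented = [augment(recorded, rng) for _ in range(1_000)]
```

The difference is subtle but important: the solution space stays anchored in recorded reality rather than in a hand-written rule set.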

In fact, Intel/Mobileye just announced a large deal to supply its self-driving technologies to a European automaker for 8 million cars, as soon as 2021, and released footage of autonomous driving in busy Jerusalem. And the debate about how many and what types of sensors, cameras, LIDARs and radars are needed for a fully autonomous vehicle is still on.

However, as discussed above, no matter how many “eyes” such cars have, the true challenge will be to verify that the “brains” behind such eyes are making the right decisions at the critical moments.

(Disclaimer: this early post by Intel claims that the Mobileye system would have correctly detected the danger and prevented the fatal accident.)

Moshe Zalcberg is CEO of Veriest Solutions, a leading ASIC Design & Verification consultancy, with offices in Israel and Serbia.

*My thanks to Avidan Efody, HW/SW Verification expert, for reviewing and contributing to an earlier version of this article.


CEO Interview: YJ Su of Anaglobe
by Daniel Nenni on 05-21-2018 at 7:00 am

AnaGlobe Technology, Inc. is a leader in layout integration solutions, which have been adopted by leading technology companies worldwide, including foundries, fabless companies, design services, packaging, panel, and IP companies. I know several of AnaGlobe's customers and am happy to work with them, absolutely.

The following is a Q&A discussion with YJ Su of AnaGlobe:

Please tell us about AnaGlobe?
AnaGlobe is a Taiwanese EDA company based in Hsinchu. We specialize in layout, especially custom layout creation and wafer-level chip-scale layout integration. We have been collaborating with several world-class semiconductor companies for more than 10 years, including customers in the US, Ireland, China, Korea, Japan, Singapore and Taiwan. We have distributors in the US, Europe, China, Korea, Japan and South-East Asia. AnaGlobe will exhibit at DAC 2018, booth #2340.

What makes AnaGlobe unique?
AnaGlobe offers a versatile IC layout framework and works closely with customers to respond quickly to the dynamic nature of top semiconductor foundries, design houses and packaging service providers. We have been responding rapidly to our customers' demands and expect to grow together with our customers in win-win collaborations.

AnaGlobe also sponsors some talented EDA researchers and projects in Taiwan. Though we started as a small company, we also work on state-of-the-art topics and have participated in leading-edge technologies such as pattern matching and machine learning in the IC layout domain.

What keeps layout integration engineers up at night?
Today's semiconductor industry faces a dynamic dilemma among economic and technology factors: time-to-market, return-on-investment (ROI) estimation across various process nodes, SoC or SiP path-finding, and the diversity of end products, with applications in IoT, automotive, mobile, high-performance computing and even heterogeneous component integration.

One common challenge for layout integration teams is the dynamic nature of complicated design intents, revisions and huge data sizes. For example, a top-level layout assembly task normally manages hundreds of sub-blocks, whether in an SoC GPU chip, an advanced-node testchip or a multi-chip SiP project, and each sub-block owner may go through many design re-spins. This requires a high-performance layout integration platform, though not necessarily expensive design implementation or signoff EDA options. Having engaged on these problems with top-tier fabless companies, foundries and packaging houses, AnaGlobe believes this topic is a good fit for our software strengths and our technical customization support, in win-win scenarios.

How can AnaGlobe products help?
We have two main products: GOLF targets full-custom layout creation with high flexibility; THUNDER targets a wider diversity of applications with great performance (e.g. terabyte-scale data capacity), from IP-level up to wafer-level chip-scale layout assembly and integration. Furthermore, with flow automation and combined CAD features, we can build comprehensive database-handling solutions for tapeout flows, chip-package integration and path-finding, and even inline manufacturing image-to-CAD inspection analysis.

GOLF:
For custom layout creation, we offer three levels of functionality:

  • As an Si2 member, we build our tools to be OpenAccess (OA) database compatible, with proprietary data structures for GDS/OASIS/DXF/EDIF import and export. GOLF provides layout and schematic viewing, layout editing, hierarchical editing, query, undo/redo, schematic-driven layout (SDL) and interfaces to major verification tools (Calibre and ICV).
  • Instead of requiring a tedious programming language to create PCell layouts, GOLF (Geometric Objects Layout Formula) offers flexible and highly productive device-level layout creation and a reusable hierarchical layout generator on OpenAccess, through both programming support (APIs for Tcl, Python and Perl) and a GUI-based PCell Designer. It is an intuitive IDE (integrated development environment) for PCell creation, preview, testing, debugging and documentation directly on the layout. Customers have also adopted PCell Designer for the creation of manufacturing test-key layouts, flat-panel display layouts, 3D packaging layouts, etc.
  • GOLF also incorporates several constraint-driven custom placers and routers for specific applications; examples include a characterization test chip layout generator for advanced process nodes, an all-angle router for free-form panel displays, constraint-driven analog layout, and so on.

THUNDER:
For wafer-level chip-scale layout integration, our goal is to support the layout database from post-P&R, IP merge, verification (XOR LVL, connectivity, etc.), debugging, defect inspection and failure analysis through to chip-package integration. Compared with normal OA file-size handling capability, THUNDER has a proprietary database, called ThunderDB, capable of handling huge layout data with extreme performance of up to 600+ GB of GDS per minute. Users can then perform big-data analysis for further processing (e.g. 3D view, cross-section, density map, wafer map), machine-learning-based optimization, and read/write of GDS, OASIS, LEF/DEF, MEBES and OpenAccess.

Can you provide some real-world (customer-based) examples?
Generally, our customers include some of the top-10 companies among IC foundries, OSATs, IC design houses, optoelectronics companies and even semiconductor equipment vendors. GOLF has been used for the layout creation of test structures for advanced process technology nodes, free-form panel displays, 3D packaging and analog designs. THUNDER has been tailored to a variety of applications including an in-house collaboration platform, a 2.5D/3D packaging layout integration flow, sign-off tapeout flows and the layout data preparation front-end of e-beam equipment.

For example, the IP merge and XOR LVL functions of THUNDER have been adopted by several customers in their sign-off tapeout flows, achieving a 10x performance gain. Some of our customers use THUNDER to handle multiple data sources (e.g. layout data, DRC results, SEM images, at hundreds-of-GB scale) to analyze data sanity.

Which markets do you feel offer the best opportunities for AnaGlobe products in the next few years, and why?
Our versatile layout platform is a good fit for both bottom-up cell design flows (mainly with GOLF) and top-down system design flows (mainly with THUNDER). In addition to GOLF's user-friendly PCell creation and schematic- and/or constraint-driven layout editing, THUNDER's capabilities in huge-design handling, an efficient database structure, flexible flow automation and interfaces to major verification EDA tools ensure seamless design flows that can confront the dynamic design efforts in IoT, automotive, mobile and high-performance computing applications. AnaGlobe continues to invest tremendous effort in developing advanced layout functions and is committed to facilitating whole design solutions.

http://www.anaglobe.com/

Also Read:

CEO Interview: Ramy Iskander of Intento Design

CEO Interview: Rene Donkers of Fractal Technologies

CTO Interview: Ty Garibay of ArterisIP


Chip Equipment where to from here?
by Robert Maire on 05-20-2018 at 12:00 pm

We may know the top, but do we know the bottom? What is the downside in NAND, DRAM and foundry? Can China help, or is the risk worse than the upside?

It would appear that our concerns in our preview piece prior to the AMAT call came true, as the stock now has a “4” handle, NAND is in question and display is down.

However, it's not like business is falling off a cliff any time soon. The industry is clearly not like the bad old days when business fell off by 50% in a quarter; it is less volatile. Aside from the industry being more mature, so are the stocks, with dividends and significant buybacks.

Companies have enough excess cash to prop up EPS in a dropping market by buying back shares, much as we saw in Applied's just-announced quarter. It may not seem like a lot, but the combination of dividends and buybacks will cushion downturns, at least from a stock perspective.

Customers are more rational. Rather than building new capacity in fab-sized chunks, they have been modulating their capacity in smaller bites. We will get some exceptions, such as display, where Samsung came to an abrupt halt, but that is the exception more than the rule, and we also saw a huge uptick in OLED leading up to that abrupt stop.

However, stocks are still volatile and react, as we have seen in this 10% drop in AMAT. While we remain concerned about NAND spend, display and slow foundry, we are still intrigued by the upside in China, where we saw over $1B in sales by Applied.

Our problem is that the China upside brings with it huge political risk. Just as we thought the risk was going away, politicians started up new legislation aimed squarely at China and tech. We think the upside in China could mitigate softness in NAND, foundry & display, but the downside beta is huuuge, as that $1B could go to zero inside of a quarter at the snap of a politician's fingers (see our preview in our recent April Fools newsletter…).

In short, the industry and stocks are at an interesting crossroads, with conflicting currents yet to be sorted out.

NAND – We still like it but it's getting old
The SSD revolution has been great. iPhones with 256GB of NAND are also great. The industry has been careful not to overbuild, but we are sure that China wants to shoehorn its way into the NAND market, and the way to do it is on price. Although we are a long way from China being a force in NAND, the existing supply/demand balance may be softening. The softening has not come from the supply side, which has been rational; it has come from the demand side, which has not kept up. Capacity will not come offline as we finish up 2D-to-3D conversions, so we really need demand to pick back up.

Foundry
TSMC, the world's biggest foundry, made it clear on their call that demand was softening. They were very good spenders going to 10nm. However, we would point out that there is significant equipment reuse between 10nm and 7nm, whereas there was not a lot of reuse going from 14nm to 10nm. This reuse issue, coupled with soft smartphone demand, makes for weaker spend, likely for a year or more. We could see spend on EUV as the industry tries to migrate, but less so in other areas.

Yield management still good
We still think the difficult EUV conversion, coupled with new, inexperienced players in China that need to figure out process, bodes well for yield management and with it KLAC, NANO, etc. Indeed, KLA has been a slight bit more positive in outlook than AMAT or LRCX, and the stock has also outperformed.

Subsuppliers – MKSI, AEIS, ICHR, UCTT etc…
As expected, these companies are off in sympathy with their customers, as well they should be. However, we would point out that they are more diversified and somewhat less levered to their customers' fortunes and misfortunes, as the case may be. Their performance has been very good, and the stocks have held up much better than in previous cycles, when they were at the end of the whip or the bottom of the hill as things flowed downhill. They are much more resilient now.

The Stocks
In terms of the stocks, we might get interested again in Applied in the mid-$40s. We could potentially see another round trip to the mid-$50s, but we think it will be harder to crack $60 given the headwinds now present. We doubt that news will improve after the current quarter is reported, and we have the China sword returned to a position above our heads. Analysts who were bullish going into the quarter have lost some credibility, and the dreaded “C” word (cyclicality) is being used again.

We still like KLAC as the company least impacted by most issues (perhaps with the exception of China). ASML will likely see improving EUV business, but that may not boost earnings, as margins remain poor versus DUV, so we are not intrigued by that play.

We still like Micron for being dirt cheap and see continued strong profits.


AMAT has OK Q2 but Q3 flat to down
by Robert Maire on 05-20-2018 at 7:00 am

“Puts & Takes”, “Reduced NAND Expectations”, 2019 to be down from 2018. Applied Materials reported a good quarter, coming in at $1.22 EPS and $4.567B in revenues versus street estimates of $1.14 and $4.45B.

However, if we back out the roughly 4% share-count reduction from the buyback, EPS would have been around $1.17 ($1.22 ÷ 1.04 ≈ $1.17), so a slight beat. Guidance was for EPS of $1.17 ±4 cents versus street of $1.16, but revenues of $4.43B ±$100M versus street of $4.53B.

So basically we had an in-line quarter with a down guide.

On the call management said there were “Puts and Takes” in customer orders (which is code for cancellations). Management pointed out weak smartphone sales as the reason for the “re-adjustment” of business from customers.

Also on the call, management said its expectations for NAND business in 2018 were “reduced” from prior expectations. This is no surprise, as NAND has been feeling a bit “toppy”.

Perhaps the most interesting comment was management’s view that 2019 will be a down year. If we try to read into the numbers it sounds like a roughly 10% drop in 2019 versus 2018.

This seems to imply that we are at or near a peak in AMAT's business. This may not be a sharp peak; it feels more like a plateau that will slowly fall off, as management made clear that customer spend has been more rational.

Finally, as expected, OLED display business was off while LCD TV business was OK.

Did AMAT just call a “market top”?

It kinda feels like it…..
With a projected down 2019 and near-term “puts and takes” coupled with reduced NAND expectations, it sure sounds like we are at or near a peak for the year, or maybe for this entire “supercycle”. Add to that the OLED issues in display, which will last through at least 2018, and we are very hard-pressed to see the upside.

“Puts and Takes”
“Puts and takes” is usually code for pull-ins and push-outs of orders, but it usually only gets said when the push-outs exceed the pull-ins. Given that the comment was associated with smartphones, we can only deduce that TSMC was backing off spending, as they mentioned the same thing on their call. This is obviously the first sign of a deteriorating market in foundry.

“NAND expectations for 2018 reduced”
NAND has obviously been on fire for well over a year and, along with DRAM, has accounted for the vast majority of semiconductor tool spending. Apple commented that it expects memory pricing to moderate later in the year, and this comment may reflect that.

Make no mistake, NAND spending is still huge, but it will likely be less huge in the future, and perhaps the rate of growth will slow or go negative.

Display will be down in 2019 – OLED off

As we had suggested in our preview note, OLED spending has slowed, which reflects the comments from Samsung. Next-gen LCD TV spend is still OK, also as expected. We don't see a bounce back in OLED any time soon, and it sounds like AMAT expects that weakness to continue. Management expects 2019 display to be down.

The stocks
Given the cautious comments made by management on the conference call, coupled with the flattish guide and the 2019 down guide, it's clear that the stock trades off tomorrow. Management sounded overly defensive during the Q&A session. The stock is already off 7% as we write this, and our preview piece suggested we could get back to a “$4 handle”, down 10%, and that's what it feels like right now after listening to the call.

Bulls will try to defend it but the company made some key negative comments that are going to be hard to overcome.

Obviously this will be negative for the overall group, but we already got a similar flat/down outlook from LRCX, so we don't see a major impact on that stock, though it will be down. KLAC remains the outperformer of the group but will likely be off a bit in sympathy as well.

It was a nice cycle while it lasted…….


SPICE Model Generation by Machine Learning
by admin on 05-18-2018 at 12:00 pm

It was 1988 when I got into SPICE (Simulation Program with Integrated Circuit Emphasis), while I was characterizing a 1.5 μm standard cell library developed by students at my alma mater, Furtwangen University in Germany. My professor, Dr. Nielinger, was not only my advisor, he also wrote the first SPICE bible in the German language. At that time SPICE simulation had already been established as the “golden” simulator for circuit design for over a decade, and it remains so to this day.
Continue reading “SPICE Model Generation by Machine Learning”


ZTE Caving shows China Trade Tirade is Hollow
by Robert Maire on 05-18-2018 at 7:00 am

We have been watching the ZTE saga play out on the public stage, as we think it is an extremely important leading example of how the administration will truly act. As we all know, actions speak louder than words, and in the case of ZTE our words said one thing and our actions said something else. We need to analyze what the actions really mean for trade issues that impact technology and China, as ZTE was perhaps both the first real test and the poster child for China trade issues.

ZTE impacts both the companies selling semiconductor components to ZTE and obviously ZTE itself. However, the true impact is much broader, as it is an indicator for other semiconductor companies that sell into China and semiconductor equipment companies that sell into China, as well as for IP issues and many other far-reaching issues.

There has been a cloud hanging over a large swath of semis, as China sales are the fastest-growing area of business for most companies and represent much, if not all, of the future upside.

From a stock perspective, we have been very concerned about the downside risk to US tech companies if the US got into a real trade war with China. We were only partially kidding with our April Fools note, which jokingly announced a halt of US semiconductor equipment sales to China. We think it could happen, but now the likelihood has been greatly reduced following the ZTE surrender.

Additionally, the Washington Post has printed a list of “demands” from China regarding trade. We now have a yardstick to judge the administration by as we can see which of the demands we have caved in on.

ZTE = Zhilaohu (paper tiger)

The term “zhilaohu” or “paper tiger” was coined by Mao Zedong to describe the US as being all bark and no bite. It would seem that we could resurrect that term to describe the current situation with trade as it applies to ZTE.

Hollywood screenwriters could not have dreamed up a more perfect nemesis for the current administration than ZTE.

  • More jobs in China versus US – check
  • Supporting Iran by providing equipment – check
  • Supporting North Korea by providing equipment – check
  • Suspected of espionage in equipment – check
  • Poster child for trade dispute – check

ZTE checks all the boxes, and the administration could have claimed victory on so many fronts, so it seemed like a slam dunk until we caved. This is why everyone is spinning.

The press has drawn a dotted line to a Trump company deal in Indonesia that will feature a Trump-branded hotel, residences and golf course, being built with $500M from the Chinese government.

Some have suggested the US caved over concerns of agriculture exports to China being at risk.

Whatever the real story is, it's confusing. The official US stance of saving Chinese jobs just doesn't seem to hang together.

The Chinese “Demand List”

The demand list from China, as published by the Washington Post:

  • The United States commits to eliminating the sanctions imposed after China's crackdown on protesters in Tiananmen Square in 1989.
  • The United States relaxes export restrictions on technology such as integrated circuits. Read this as: China can buy all the US technology it wants; good for US chip equipment makers, bad for competing chip makers, which could get trashed like solar and LED before them.
  • The United States allows U.S. government agencies to purchase and use Chinese information technology products and services. This goes against concerns about trojan-horse firmware in Chinese equipment, such as routers and mobile phones.
  • The United States agrees to treat Chinese investment and investors equally to those from other countries and place no restrictions on Chinese investment. This would allow Chinese purchases of Lattice, Xcerra, Micron or many other US tech companies.
  • The United States agrees to ensure Chinese businesses can participate in U.S. infrastructure projects. This allows suspect equipment and companies to be used in critical US infrastructure.
  • The United States agrees to strengthen protection of Chinese intellectual property. This means companies like AMEC will win over companies like Veeco; US IP protection would be zero.
  • The United States agrees to drop its anti-dumping cases against China at the World Trade Organization. China would be allowed to dump in the memory chip business, just as they do in solar and LED.
  • The United States agrees to terminate its investigations into Chinese intellectual property theft and not impose any of the sanctions Trump already announced. China would be free to rip off any US IP.

By any standard this is a pretty ugly list. We now have a very public yardstick by which to measure future US trade deals, and we can grade the ZTE reversal as an “F”.

Removes the sword of Damocles
We have been concerned about the risk of a major event in the trade tirade with China: a halt of chip sales or equipment sales, or other things that could trash US tech stocks.

Given what happened with ZTE, it is very clear that the probability of getting into a real trade war with China is near zero, and anything we do is likely to be lip service.

People can point to CFIUS blocking several deals, but the reality is that the biggest blocked deal was Broadcom buying Qualcomm, and that was not directly against a Chinese company. The Lattice deal was blocked, but it probably would have been blocked for any foreign buyer.

The administration seems to be against almost any large deal, not just foreign companies buying US companies.

Maybe next year's April Fools article will be about giving advanced chip tools & designs away free to China……

The stocks
In our view, there is no specific upside to be had in US stocks, other than those directly involved with ZTE, which have already popped.

In our view, risk to US semiconductor and equipment sales has been greatly reduced, perhaps not to zero, but to levels similar to what we had prior to the trade tirade.

The longer term threat to IP and technology dominance still remains and could get worse depending upon how the US responds to China’s demand list.

For now, at least another variable has been removed or reduced from an already volatile tech sector.

This could obviously change at any moment and we could see yet another 180 degree reversal from the administration so we wouldn’t get too comfortable.

Perhaps the administration is listening to cooler heads in the tech industry, like Tim Cook, but we doubt it…….

Read more from Semiconductor advisors


Retooling Implementation for Hot Applications
by Bernard Murphy on 05-17-2018 at 7:00 am

It might seem I am straying from my normal beat in talking about implementation; after all, I normally write on systems, applications and front-end design. But while I’m not an expert in implementation, I was curious to understand how the trending applications of today (automotive, AI, 5G, IoT, etc.) create new demands on implementation, over and above the primarily application-independent challenges associated with advancing semiconductor processes. So, with apologies in advance to the deep implementation and process experts, I’m going to skip those (very important) topics and talk instead about my application perspectives.


An obvious area to consider is low power design. Automotive, mobile (phone and AR/VR) and IoT applications obviously depend on low power (even in a car, all the electronics we are adding can quickly drain the battery). AI is also becoming very power-critical, especially as it increasingly moves to the edge for FaceID, voice recognition and similar features. 5G on the edge, in enhanced mobile broadband applications for example, must be very carefully power managed thanks to heavy MIMO support and the consequent parallelism required to support high throughput rates.

Power is a special challenge in design because it touches virtually all aspects, from architecture through verification and synthesis, then through to the PG netlist. Certainly you need uniformity in specifying power intent. The UPF standard helps with this, but we all know that different tools have slightly different ways of interpreting standards. Mix-and-match flows will struggle with varying interpretations, to the point that design convergence can become challenging. The same could even be true within a flow built on a single vendor's tools, unless special attention is paid to uniformity of interpretation. So this is one key requirement in the implementation flow.

Another big requirement, associated certainly with automotive but also with long-life, low-support IoT, is reliability. We demand very low failure rates and very long lifetimes (compared to consumer electronics) in this area. Implementation must take on-chip variation (OCV) in timing into consideration. I've written before about the impact of local power integrity variations on local timing. Equally, power inrush associated with power switching, and unexpectedly high current demand in certain use modes, increase the risk of damaging electromigration (EM). Traditional global-margin approaches to managing this variability are already painfully expensive in area overhead. Better approaches are needed here.

Aging is a (relatively) new concern in mass-market electronics. One major root cause is negative-bias temperature instability (NBTI), which occurs when (stable) electric fields are applied for a long time across a dielectric (for example, when a part of a circuit is idle, or a clock is gated for long periods). This causes voltage thresholds to increase over time, which can push near-critical paths to become critical. Again, it would be overkill (and too expensive) to simply margin this problem away, so you have to analyze for risk areas based in some manner on typical use cases.
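For a rough sense of the behavior (this is a commonly quoted empirical fit from the literature, not any particular foundry's aging model), the NBTI-induced threshold shift under constant stress is often approximated as a power law in stress time with an Arrhenius temperature dependence:

\Delta V_{th}(t) \;\approx\; A \, e^{-E_a/kT} \, V_{gs}^{\gamma} \, t^{n}, \qquad n \approx 0.1\text{--}0.25

Because the time exponent is small, the drift is front-loaded but never really stops, which is why the analysis has to weight how long each part of the circuit actually sits under bias in typical use.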

Thermal concerns are another factor, and here I'll illustrate with an AI example. As we chase the power-performance curve, advanced architectures for deep neural nets are moving to arrays of specialized processors with a need for very tightly coupled caching and faster access to main memory, leading to a lot of concentrated activity. Thanks to FinFET self-heating and Joule heating in narrower interconnects, this raises EM and timing concerns which must be mitigated in some manner.

Still on AI, there's an increasing move to varying word-widths through neural nets. Datapath layout engines will need to accommodate this efficiently. Meanwhile, the front-end of 5G, for enhanced mobile broadband (eMBB) and even more for mmWave, must be a blizzard of high-performance and highly parallel activity in order to sustain bit-rates of 10Gbps. For eMBB at least (I'm not sure about mmWave), this is managed through a multi-input, multi-output (MIMO) interface, through multiple radios to basestations, and therefore multiple parallel paths into and out of the modem. In addition, there is support for highly parallel processing from each radio into one or more DSPs to implement beamforming and identify the strongest signal. Getting to these data rates requires very tight timing management in implementation, and also very tight power management.

So yeah, I would expect implementation for these applications to have to advance beyond traditional flows. Synopsys has developed its Fusion Technology (see the opening graphic) as an answer to this need, tightly tying together all aspects of implementation: synthesis, test, P&R, ECO and signoff. The premise is that all these demands require much tighter correlation and integration through the flow than can be accomplished with a mix-and-match approach. Fusion Technology brings together all of Synopsys' implementation, optimization and signoff tools and purportedly has demonstrated promising early results.

If you’re wondering about power integrity/reliability, Synopsys and ANSYS have an announced partnership, delivering RedHawk Analysis Fusion, an unsurprising name in this context. So they have that part covered too.

I picked out here a few topics that make sense to me. To get the full story on Fusion Technology, check out the Synopsys white paper HERE.


5 Ways to Gain an Advantage over Cyber Attackers
by Matthew Rosenquist on 05-16-2018 at 12:00 pm

Asymmetric attacks, like those in cybersecurity, benefit the aggressor by letting them maintain the ‘combat initiative’. That is, attackers determine who is targeted and how, when, and where attacks occur. Defenders are largely relegated to preparing for and responding to the attacker's tempo and actions. This is a huge advantage.

Awakening Business
The business world is beginning to understand some of the inherent challenges it faces. Warren Buffett, the Oracle of Omaha and CEO of Berkshire Hathaway, has stated that cyber attacks are the “number one problem with mankind” and explained that attackers are always ahead of defenders, and that this will continue to be the case.

Not all is lost, of course, but we must be cognizant of the attacker's strengths in order to form a winning defensive strategy. Cybersecurity is not just patching or an exercise in engineering; rather, it is a much larger campaign involving highly motivated, skilled, and resourced adversaries on a battlefield that includes both technology and human behaviors. If you only see cybersecurity as a block-and-tackle function, you have already lost.

Finding Rocks
Some believe it is imperative to spend inordinate amounts of time and resources attempting to find every possible weakness so that the vulnerabilities can be closed. This is a near-impossible task that nobody has ever achieved, and a sinkhole that can consume every resource put toward it.

The truth about vulnerability scanning strategies is that most vulnerabilities are never exploited; only a subset actually poses a material threat. Prioritization is key, because it is a waste to commit security resources to threats that will never manifest. Just because something is possible does not mean it will occur.

Consequently, I think trying to find ALL vulnerabilities is a failing strategy, as it will consume far too many resources and still likely end in a compromise. As Frederick the Great said, “In trying to defend everything he defended nothing”.

But that has not stopped the industry from travelling down this path. For many, it has become a rut of singular focus. The lure is that it seems like familiar territory, similar in nature to other information technology problems, and something which can be explained and partially measured. However, it is a mirage. Appearing just out of reach, it is easy to believe it is attainable, but no matter how far you walk, you never get there. Closing all vulnerabilities to become secure is a mentally satisfying theory, but in practice it is not achievable. The emergence of vulnerabilities is tied to complexity and innovation; as long as technology continues to innovate rapidly and become more complex, vulnerabilities will never cease, and the process of closing them is never finished. In the end, organizations risk fighting a war of attrition in which defenders consume vastly more resources than the opposition in an endless cycle of vulnerability discovery.

I would postulate that you don't need to address all vulnerabilities equally. I am not advocating abandoning the search for weaknesses, as there is great value in the exercise, but there should be allowances, prioritization, and permissible tradeoffs! In fact, it must be one part of a greater effort by organizations to manage the cyber risks of their digital ecosystems. It should never be the sole focus of a security group.

Other plans can counter such threats, especially if you are creative and have certain insights. If you know your enemy, it can suffice to know how they will maneuver. Consider this: a chess player does not need to know all the possible future combinations. Rather, they lay traps, see the board from their opponent's perspective, and move with insight into how their opponent will likely act. You beat the player, not the chess pieces.

The best way to gain an advantage over cyber attackers is to establish a professional, efficient, and comprehensive security capability that will last and adapt to evolving risks. Defenders must minimize the strengths of their opponents while maximizing their own advantages. Strategic planning is as crucial as operational excellence.

5 Recommendations to Build Better Security:

1. Establish clear security goals, measures & metrics, and success criteria. Thinking you will stop every attack is unrealistic. Determine the optimal balance between residual risk, security costs, and the productivity/usability of systems.

2. Start with a strategic security capability that encompasses Prediction (of threats, targets, and methods), Prevention (of attacks), Detection (of exploitations), and Response (to rapidly address impacts) in an overlapping, continuous improvement cycle. This reinforces better security over time and positions resources to align with the most important areas as defined by the security goals. Over time each area will improve, supporting the others and collaboratively contributing to a better, sustainable risk-management posture.

3. Understand your enemy. Threat agents have motivations, objectives, and methods they tend to follow, like water seeking the path of least resistance. The more you align your defenses to these, the more scalable and effective you become. Don’t waste precious resources on areas that don’t affect the likelihood, impact, or threats.

4. Incorporate both technical and behavioral controls. Do not overlook the human element of both the attackers and targets. Social engineering, as an example, is a powerful means to compromise systems, services, and environments, even ones that possess very strong technical defenses.

5. Maintain a vulnerability scanning capability (technical and behavioral), but prioritize the effort to identify and remediate the weaknesses which pose the greatest risk: the avenues most likely to be exploited and those which can cause unacceptable impacts. Knowing that most vulnerabilities will never be leveraged, this is an opportunity to use resources effectively and reallocate the rest to efforts where they make more sense; a toy sketch of this kind of risk-based prioritization follows below.
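As an illustration of that prioritization (a minimal sketch; the field names, weights and scores below are hypothetical, not any particular scanner's output), the idea is simply to rank findings by severity weighted by exploitation likelihood and asset impact, rather than treating every finding equally:

```python
# Minimal sketch of risk-based vulnerability prioritization.
# All fields and weightings are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float            # e.g. CVSS base score, 0-10
    exploit_likelihood: float  # 0-1: how likely is real-world exploitation?
    asset_impact: float        # 0-1: business impact if the asset is compromised

def risk_score(f: Finding) -> float:
    # Weight raw severity by exploitability and by how much the asset matters;
    # most findings end up scoring near zero and can safely wait.
    return f.severity * f.exploit_likelihood * f.asset_impact

findings = [
    Finding("internet-facing RCE", 9.8, 0.9, 1.0),
    Finding("internal info leak", 5.3, 0.1, 0.3),
    Finding("lab-only crash bug", 7.5, 0.01, 0.1),
]

# Remediate in descending risk order instead of chasing every finding equally.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):6.2f}  {f.name}")
```

Real programs would feed such a model from threat intelligence and asset inventories, but even a crude weighting pushes the lab-only curiosities to the bottom of the queue.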

The Long Game
As organizations approach cybersecurity challenges, they should consider a more balanced, prioritized, and proactive approach to managing cyber risks. They will get farther, faster, with fewer resources and more focus. This is not a sprint. As long as technology continues to innovate and be deployed, risks will continue to evolve.

In the end, strategic warfare outpaces battlefield tactics.

Interested in more? Follow me on your favorite social sites for insights and what is going on in cybersecurity: LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy blog, and Steemit


The Rise and Fall of ARM Holdings
by Daniel Nenni on 05-16-2018 at 7:00 am

Publishing a book on the history of ARM was an incredible experience. In business it is always important to remember how you got to where you are today, to better prepare for where you are going tomorrow. The book “Mobile Unleashed” started at the beginning of ARM (Acorn Computer), where a couple of engineers' crazy idea of designing a processor from scratch grew into a monopoly in processor cores, controlling roughly 95% of the world's mobile electronics today.

ARM book number two begins with the acquisition of ARM by SoftBank Group Corp, a Japanese multinational conglomerate holding company. Nobody saw this one coming, and some people, including myself, still wonder why the sixth-largest telephone operating company (by total revenue, $74.7B) would buy a semiconductor IP company.

Unfortunately, under SoftBank, ARM profits are dropping due to aggressive expansion (headcount and R&D investment). The explanation from SoftBank is that ARM is being positioned to rejoin the public markets in five to seven years as an even more profitable company.

According to the recently released 2018 IP Design Report from IPnest, ARM royalties are up 17% but ARM licenses are down 6.8%, which is a much more troubling trend for future royalty strength. New accounting practices can always be blamed, but according to my information the problem is much more a change in company culture and behavior. Compounding that is the rise of a disruptive ARM alternative, RISC-V, which has truly hit phenomenon status.

We have been covering ARM since the beginning of SemiWiki in 2011, with 262 blogs published that have been viewed 1,617,437 times by 13,993 different domains. RISC-V is a recent addition to SemiWiki analytics, and thus far we have published 9 blogs that have been viewed 148,047 times by 6,988 domains. You can expect expanded coverage of RISC-V on SemiWiki in the near future, for sure. Disruption is coming to the CPU IP market, and we will have a front-row seat. Disruption is for the greater semiconductor good, absolutely!

You can read more about RISC-V HERE or you can go straight to the member page HERE.

One of the more interesting RISC-V developments is Intel's investment in SiFive, a RISC-V implementer. SiFive was founded by the creators of the free and open RISC-V architecture to battle the escalating costs of chip design. One of those cost barriers, of course, is the upfront ARM licensing fee.

“We have long led the call for a revolution in the semiconductor industry, and believe SiFive, and our technologies, demonstrate a significant path forward for the industry,” said SiFive CEO Naveed Sherwani. “This investment by Intel Capital will enable SiFive to empower any individual or company to produce a silicon solution that meets their needs, quickly and affordably.”

“RISC-V offers a fresh approach to low power microcontrollers combined with agile development tools that have the potential to help reduce SoC development time and cost significantly,” said Raja Koduri, senior vice president of the Core and Visual Computing Group, general manager of edge computing solutions and chief architect at Intel Corporation. “SiFive’s cloud-based SaaS approach provides another level of flexibility and ease for design teams, and we look forward to exploring its benefits.”

The working title of the book is “The Rise and Fall of ARM Holdings”, but it could certainly change to “The Rise and Fall and Rise Again of ARM Holdings”. We will have to wait and see how the CPU IP disruption unfolds over the next year or three, so stay tuned.