
Clarity 3D Transient Solver Speeds Up EMI/EMC Certification

by Tom Simon on 10-25-2020 at 8:00 am


Cadence made waves a while back with its innovative Clarity 3D Solver, a FEM solver for near field EM analysis. Now they are shaking things up with their new far field Clarity 3D Transient Solver. System level EMI and EMC analysis has often exceeded the limits of simulation tools, leading to expensive and time-consuming prototype development and measurement. Previous solvers were not architected for parallel processing and suffered from capacity limitations.

EMI and EMC are extremely important in consumer electronics, automotive systems and healthcare devices. Each of these markets can benefit from high accuracy, high capacity far field EM simulation. Medical devices often require optimized wireless communications, especially when interacting with the human body. Likewise, there are strict criteria necessary for device safety. Consumer electronics designs often have to trade off emissions, antenna placement, cooling, specific absorption rates (SAR) and other factors to arrive at a carefully balanced solution.


I recently had a call with Brad Griffin, Cadence Product Management Group Director for Multi-Physics System Analysis, to go over the features and capabilities of the new Clarity 3D Transient Solver. He pointed out that the solver can use GPUs or CPUs to achieve nearly linear scaling of runtime with increased processor count. Because it can use hundreds of processors, its capacity is practically unbounded. Cadence also offers access to the Clarity 3D Transient Solver through its CloudBurst platform, which runs in the cloud. Access to Cadence tools is optionally provided through the CloudBurst web interface, which also enables high-throughput transfers of design data. This virtually eliminates the IT overhead of cloud accessibility for Cadence customers and offers massive scalability when needed.
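As an illustration of why near-linear scaling matters, a quick Amdahl's-law sketch shows how even a small serial portion caps the gain from adding processors. The parallel fraction below is a hypothetical value for illustration, not a Cadence figure:

```python
# Illustrative only: Amdahl's-law estimate of parallel solver speedup.
# The 0.99 parallel fraction is a hypothetical value, not a Cadence figure.
def speedup(parallel_fraction: float, processors: int) -> float:
    """Amdahl's law: the serial part limits the achievable speedup."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

print(round(speedup(0.99, 10), 1))    # ~9.2x on 10 processors
print(round(speedup(0.99, 100), 1))   # ~50.3x on 100 processors
```

A solver that keeps the serial fraction near zero is what allows capacity and runtime to keep improving as hundreds of processors are thrown at the problem.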

Brad talked about their pre-announcement engagement with Ultimate Technologies, an EMI/EMC testing company, that was able to compare Clarity 3D Transient Solver results to actual measurement data. The solver was very accurate and enabled their automotive customers to pass EMI testing months sooner, often saving as much as 30% of the design cycle time. Simulation allowed rapid iterations to get to the final passing version of the systems.

According to Brad, board level screening can be accomplished with Sigrity Aurora in-design analysis to catch many common design mistakes that produce excessive EMI. Then the entire system, including enclosures, etc. can be run with high accuracy in the Clarity 3D Transient Solver. Similarly, near field simulation models can be generated in their 3D FEM solver and supplied to the Clarity 3D Transient Solver to assist in getting the most comprehensive results. The solver is fully integrated with Cadence chip, package and PCB design tools. CAD data from 3D mechanical design tools can also be imported into the Clarity 3D Transient Solver. When the MCAD data is merged with Cadence electrical CAD design data, not only can problems be found, but they can be fixed and re-simulated.  Brad emphasized that the combined design and analysis solution makes the EMI signoff task much more efficient than when just point tools are used for analysis.

Cadence has shown a consistent and strong execution of a complete flow for designing electronic products and systems. With the Clarity 3D Transient Solver they have moved into new territory. In many markets, such as automotive, much more complex systems are being designed that have interactions with each other and their operating environment. It’s good to see well integrated solutions that can scale with the complexity of the design challenges. More information about Cadence’s new Clarity 3D Transient Solver can be found on the Cadence website.

Also Read

The Most Interesting CEO in Semiconductors!

Covering Configurable Systems. Innovation in Verification

Tempus: Delivering Faster Timing Signoff with Optimal PPA


Coronavirus Remains Good for Semiconductors but not China

by Robert Maire on 10-24-2020 at 6:00 am

  • LRCX puts up a solid QTR and slightly soft guidance
  • China Concerns weigh on future but COVID remains driver
  • Memory spend could be better & help offset China
Solid September Quarter

Lam reported non-GAAP EPS of $5.67 versus the street's $5.19, and revenues of $3.18B, which was $70M better than street expectations. Lam obviously does a very good job of “managing” street expectations, as it has come in above estimates as far back as we can remember. In addition, there haven’t been any major surprises in execution that have impacted financials.
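As a quick sanity check on those figures, the beats work out as follows (all numbers come from the article; this is a trivial back-of-envelope calculation, not a model of Lam's financials):

```python
# Back-of-envelope check of the beats described above (figures from the article).
eps_actual, eps_street = 5.67, 5.19
rev_actual_b, rev_beat_m = 3.18, 70   # revenue in $B, beat in $M

eps_beat = eps_actual - eps_street
rev_street_b = rev_actual_b - rev_beat_m / 1000  # implied street revenue estimate

print(f"EPS beat: ${eps_beat:.2f} ({eps_beat / eps_street:.1%})")   # ~9.2% above street
print(f"Implied street revenue: ${rev_street_b:.2f}B")              # ~$3.11B
```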

Managing and smoothing what has always been a cyclical, volatile business has helped the overall financial valuation of the company.

Guidance was also very solid at non-GAAP EPS of $5.60 ± $0.40 and revenues of $3.3B ± $200M. Guidance is lower than it would otherwise have been had Lam been able to ship to SMIC. Our view is that the impact is relatively minimal, as it is only SMIC (for now), and we are sure SMIC was stocking up on tools ahead of time.

Applying for a Chinese SMIC License that will never come

Management mentioned on the call that shipments to SMIC have obviously stopped but that they are applying for licenses. Management did not sound very hopeful that licenses would be granted by the US government.

We agree and think we can discount future business from SMIC for the foreseeable future. If it ever does come back in the future it will certainly be at lower levels.

The bigger question is whether the China embargo spreads.

At 37% of business, China is far and away Lam’s biggest customer. Korea is a distant second at 24%. Taiwan (mainly TSMC) is 14%. The US, which is mainly Intel and Micron, is a tiny 4% of business.

In our view there is likely a better than 50/50 chance of the China embargo spreading. If not by actions of the US government, then by actions of the Chinese government who may want to put US semiconductor equipment entities on the “unreliable entity” list.

This is obviously an opportunity for companies like Tokyo Electron to take share in China or Chinese homegrown equipment companies like AMEC to take even more share.

We have seen real history of this: Veeco’s China business went from huge to zero as China-native AMEC took away 100% of its MOCVD tool business, a near-death experience for Veeco.

Even if the embargo doesn’t spread, China business will slow

Even if the US and Chinese governments don’t elevate the tit for tat trade war, we think there will still be a loss of business as China looks elsewhere for more reliable suppliers than US suppliers.

Our sources in China say there is a huge push to get away from US suppliers, even though there may not be many alternatives.

COVID remains good for Chips

The move to remote learning and remote business is likely a permanent structural change that isn’t going fully back to whatever it once was.

Long-term chip demand trends remain good, but we still remain concerned about the global macroeconomic slowdown finally trickling down to the technology sector, and more specifically to semiconductors.

We also remain cautious about Chinese customers stocking up on both tools and chips in advance of further trade issues.

Service breaks a billion dollars

Service, which is of increasing importance to all equipment companies, hit a substantial milestone of more than $1B in business in a quarter.

Service will continue to smooth the natural lumpiness of the tool business, and at roughly 30% of revenue it has a real ability to do so.

The stocks

Given that the stock has been at all time highs and priced to perfection, there could be some weakness on the cloudy China outlook that may scare some investors (for good reason).

It’s certainly not unexpected, but the potential impact can’t be ignored, as China is more than a third of business. While COVID has been a large secular positive driver, we are still concerned about the longer-term negatives.

We see little to no collateral impact on KLA or Applied Materials or sub suppliers to Lam other than some weakness due to China. The financials are near perfect and execution remains very strong so Lam remains a very strong house in a less certain, but still growing neighborhood.

Also Read:

Is Intel Losing its Memory?

ASML is Strong Because TSMC is Hot!

SMIC Cut off by US Government is Doomsday Scenario for US Chip Equipment Companies


Intel TSMC Update!

by Daniel Nenni on 10-23-2020 at 10:00 am


Based on the Intel investor call yesterday, here are some interesting comments Bob Swan made related to Intel outsourcing manufacturing and 7nm progress. Let’s start with the prepared statement:

Bob Swan: Over the last couple of years, we have been focused on three critical priorities; improving our execution to strengthen our core business, extending our reach to accelerate the growth of the company, and continuing to thoughtfully deploy your capital.

We have and do get great benefits from internal manufacturing. We call it our IDM advantage, because it provides us attractive economics, co-optimization of design and process technology development and supply assurance. So as we engage the ecosystem more broadly, we want to preserve some of the advantages of IDM like schedule, performance and supply, as we work with our strategic partners.

Finally, I want to reiterate our intention to continue investing in leading process technology development to bring future process nodes and advanced packaging capabilities to market. This is a powerful force in creating future differentiation for our products and provides tremendous option value for our business.

Me: Clearly Bob has been getting grief about his previous comments on outsourcing to pure-play foundries. There has also been speculation about Intel outsourcing to both TSMC and Samsung which fanned the “Intel will go fabless” flames even further.

As I previously stated in Three Things You Have Wrong About Intel: “The one thing Bob Swan will NOT do however is erase the Intel manufacturing legacy and go fabless. Nobody wants that on their semiconductor CEO resume.”

I also find zero truth in the rumor that Intel will use both TSMC and Samsung. To be successful in outsourcing and competing with AMD on a level playing field Intel needs to be exclusive with TSMC. If you outsource to both TSMC and Samsung you will be on the outside looking in, absolutely.

During the Q&A:

Can you explain how easy it is to transition from TSMC back to your internal manufacturing? How comfortable that is? And would that be for existing type of architecture or more like chiplet type of architectures?

Bob Swan: Yes. It’s a good question. I mean I gave kind of the criteria around should we under what circumstances go out more of schedule predictability performance and of economics if you will the bookend on that — on those three criteria really around one, the ease of portability of our technologies to go out. And I would say, we feel very confident in the ability of us being able to port to TSMC.

And the other bookend is in the event that we go out what’s the ease in which we can port back if we conclude that’s the best alternative for either core products or chiplets. I would just say that we feel increasingly confident that yes in fact, if we conclude going out makes sense that we can. And also that in the event we want to port back in, we can as well. And that’s — those are general observations around the bookend questions.

Me: Hopefully this is just Bob’s naivety with semiconductor terminology. If Intel does in fact “port” designs over to TSMC, they will be less competitive than AMD’s designs done directly for TSMC in regard to power, performance, and area. Let’s not forget what happened when Apple ported the A9 from Samsung 14nm to TSMC 16nm (ChipGate). It is progress, however, for Bob to acknowledge they are in fact working with TSMC.

Quick 7nm Update:

Bob Swan: I would say since the last time we spoke, our 7-nanometer process is doing very well. I mean, last time we spoke we had identified an excursion. We had root caused it. We thought we knew the fix. Now, we’ve deployed the fix and made wonderful progress. But nonetheless, we’re still going to evaluate third-party foundry versus our foundry across those three criteria. And the call will be towards the end of this year early next year.

Me: I know that Intel has TSMC PDKs but I have not confirmed any tape-outs as of yet. I do think that Intel will outsource price and power sensitive chips to TSMC to better compete in those markets and to reduce manufacturing expenses. Today Intel has three fabs running 10nm chips. If they do partner with TSMC Intel will only need one 7nm fab which is a very attractive CAPEX reduction (fablite versus fabless) while preserving their IDM status.


CEO Interview: Andreas Kuehlmann of Tortuga Logic

by Bernard Murphy on 10-23-2020 at 6:00 am


You may remember Andreas from his time at Synopsys, where he led the new Software Integrity Business Unit. He joined Tortuga Logic a couple of months ago to lead the company. Given his background in software security, I was eager to get a CEO interview. Andreas is an EE with a background at IBM in PowerPC and EDA. He directed Cadence Research Labs in the 2000s, ran R&D at Coverity (static verification for software) and taught at UC Berkeley. Synopsys acquired Coverity in 2014, renaming it the Software Integrity Group. Andreas, as GM of that group, grew it to a $350M/year revenue business. A big focus for that organization, driven also by multiple acquisitions, is application security. Andreas knows his way around the software security world, making him a promising catch for Tortuga Logic.

What’s your view on hardware security from the software world?

We’ve worried about security in software for a long time and a lot of work has been done to address problems. I’ve seen sophistication in attacks growing over that period, and defenses against those attacks continue to evolve. For a long time hardware security had a fairly academic flavor, but when Spectre and Meltdown hit, everyone woke up. Now we’re seeing an exponential growth in new vulnerabilities. Fixing software alone doesn’t help if the hardware can be hacked. I joined Tortuga Logic as CEO for this reason. I see a journey from security problems in hardware to scalable solutions which looks very similar to the journey we travelled in software. These lessons will help us address the challenges surrounding hardware security faster.

Tell me about the software security journey

Cyber security is a very broad, fractured field: endpoint security, network security, container security, firewalls, incident response – you name it. Application security started moving to the forefront about 10-15 years ago, when the traditional firewall started losing its central role in security. As apps moved to the cloud and to your mobile device, the attack surface became the application itself. We saw application vulnerabilities and attacks, like SQL injection attacks, on an almost daily basis. Some attacks were so simple, high-school kids were doing them.

The financial services industry was among the first to recognize the threat. Responding though turned into a journey. First was to assess the business exposure – potential financial repercussions, brand damage, regulatory compliance – always balancing risk and investment. Early solutions mostly focused on a manual approach. Security experts applied threat modeling, penetration testing and code reviews to identify and remediate vulnerabilities. Later, part of the know-how was automated and embodied in various tools such as static and dynamic application security testing.

When applied as an afterthought, at a time when development was almost finished, these tools would turn up hundreds or thousands of problems that there was no time to fix. Security had to be pushed left into development. As security testing was adopted in the software coding phase, the security team could increasingly focus on governance, establishing the business requirements and overseeing implementation. This includes defining and enforcing policies that drive every phase of product planning, development and delivery – ensuring that security’s priority is co-equal with other objectives. This is a continuation of the journey at a higher level of maturity.

Where is hardware security?

For many years, hardware security was a fringe activity; hacks generally required physical access to the device and had limited incentives for the perpetrator. Spectre/Meltdown changed the rules. For a number of hardware vulnerabilities, a hacker could now attack a device remotely. Just running software on a server that exploits such a vulnerability could get access to the foundation of the compute platform, bypassing all security protections at higher levels. And the problem wasn’t a one-off. As researchers look closer, they are finding more and more vulnerabilities.

Where are we on the learning path for hardware security?

The fundamental process is the same. One needs to approach hardware security from a business point of view: what is the risk for the company and customers, what is the potential financial exposure, what about regulatory compliance, etc. You then compile business requirements into security policies, which you then integrate into the product planning, development, testing and delivery process.

Following this approach, one should treat security as another form of signoff, like timing or power. I see hardware companies already working on putting such processes in place. For example, hardware roots of trust offer clear advantages. Reduce the attack surface by localizing the processing of security assets as much as possible. You still must ensure the root of trust is integrated properly and configured correctly, realizing that many of them are configured in software, often during the boot phase. Similar to software, security is a verification problem as much as an architecture problem.

In signoff, it’s important to accept that security is not a clean go/no go like timing. Security signoff is a risk signoff – what risk is a business willing to live with. In other words, how much effort do you invest in prevention versus managing a potential incident? This is why up-front business analysis is so important. There is no 100% secure design – therefore it is equally important to define an incident response process to address a vulnerability. In software you can release a patch. In hardware – sometimes you may be able to release a firmware or OS update. However, provisioning such patches may be much harder than for application software. The maturity of this process is in a very early state, I think.

Closing thoughts?

It is good to see that many hardware companies are already working on hardware security. I want to encourage them to approach security holistically. Start from a business perspective, then drive requirements on architecture, design, verification and security signoff. There are a number of emerging solutions to help build a comprehensive approach to hardware security, ranging from security IP components and verification products to security services and more. For example, we at Tortuga Logic offer security testing products that interleave with the design verification process and support a rigorous methodology for “building security into” semiconductor designs. Check us out.

Also Read:

CEO Interview: Paul Wells of sureCore

CEO Interview: Wally Rhines of Cornami

CEO Interview: Dean Drako of IC Manage


Flex Logix Brings AI to the Masses with InferX X1

by Mike Gianfagna on 10-22-2020 at 10:00 am

InferX X1 PCIe board

In April, I covered a new AI inference chip from Flex Logix. Called InferX X1, this part had some very promising performance metrics. Rather than the data center, the chip focused on accelerating AI inference at the edge, where power and form factor are key metrics for success. The initial information on the chip was presented at the Spring Linley Processor Conference. At that event, the part was reported to be taping out to TSMC’s 16FFC process soon. Fast forwarding to today, Flex Logix has the part back from the fab. Cheng Wang, SVP and co-founder at Flex Logix, recently presented silicon results at the Linley Fall Processor Conference. Flex Logix also issued a press release announcing working silicon for the part. All of this provides technical backup to the fact that Flex Logix brings AI to the masses with InferX X1.

To examine the strategy and technical details behind InferX X1, I had the opportunity to speak with Geoff Tate, CEO and co-founder at Flex Logix. Geoff has quite a storied career in semiconductors. Prior to co-founding Flex Logix, Geoff became a director at Everspin, a leading MRAM company. He continues to sit on this Board. Prior to that, Geoff was the founding CEO of Rambus, where he took the company to IPO and a $2B market cap. Before co-founding Rambus, he was SVP of microprocessors and logic at AMD.

Geoff explained that the chip came back from TSMC about a month ago. It’s running in the lab and the team is in the process of bringing up YOLOv3, a popular object detection algorithm. There is significance to this choice of algorithm. More on that in a moment. Geoff discussed throughput per dollar as a key metric. He reminded me this part is not for data centers. Rather, it is focused on “systems in the real world” where the metrics are quite different. Think of applications such as ultrasound, MRI, factory and retail distribution robots, all kinds of cameras, autonomous vehicles, gene sequencing and industrial inspection. Edge computing is used to describe some of these applications, but the targets for the InferX X1 are broader than that.

Geoff explained that he’s spoken with companies in all these application areas and more, and everyone wants more performance at a lower price. Performance drives the ability to add features, and price drives the ability to expand the market. InferX X1 delivers more performance at a lower price with no compromises. Latency is also low and accuracy is very high – Flex Logix is not cutting corners. This profile of capabilities is supported by a very small die size – 54 mm2 in TSMC 16FFC. Total design power is 7 – 13W. Note this power spec is for worst-case conditions (max junction temperature, max voltage, worst-case process). This is a small, low-power, high-performance part.

The chip is populated with Tensor processors that do the inference work. Geoff explained that Flex Logix also provides PCIe boards for those applications that have a form factor that requires this additional hardware. These boards can also be used to do software development on the chip before deploying it in the system.  More on software in a bit.

We then discussed benchmarking. The vast majority of applications today use Nvidia’s Xavier NX or Tesla T4 for edge inference. The following analyses use two customer models and YOLOv3. The use of YOLO is significant in that many customers use this model in their real applications. That makes it a much more relevant model for benchmarking, since it exercises real-world workloads on data of significant size. Other benchmark models are smaller in scale and so don’t really stress the architecture.

First, Geoff reviewed InferX X1 vs. Xavier NX, the smaller of the Nvidia products. Here, the Flex Logix part delivered up to 11X performance gains with a much smaller and less expensive part. Regarding size, Geoff pointed out that Xavier is an SoC that contains other IP such as Arm processors. Comparing just the accelerator portions of the two designs, the Flex Logix part is about 3X smaller.

InferX X1 vs. Xavier NX

Next, InferX X1 was compared to the larger Nvidia Tesla T4. This device is a pure accelerator like the Flex Logix part, so the comparison is more direct. Not surprisingly, the very large Tesla T4 outperformed the InferX X1 in most cases. Interestingly, there was one case where the smaller Flex Logix part outperformed the T4.

InferX X1 vs. Tesla T4

The real insight came when the data was presented in a normalized fashion, that is, performance per mm2. This analysis shows how much more efficient the InferX X1 is than the Tesla T4: 3X – 18X.

InferX X1 vs. Tesla T4 Normalized
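The normalization itself is simple enough to sketch. In the snippet below, only the 54 mm2 InferX X1 die size comes from the article; the throughput figures and the ~545 mm2 T4 die area are hypothetical placeholders, used just to show how a larger raw number can still lose on a per-area basis:

```python
# Sketch of the perf-per-area normalization described above.
# Only the 54 mm^2 InferX X1 die size comes from the article; the fps numbers
# and the ~545 mm^2 T4 area are hypothetical placeholders.
def perf_per_mm2(frames_per_second: float, die_area_mm2: float) -> float:
    return frames_per_second / die_area_mm2

x1 = perf_per_mm2(100.0, 54.0)    # hypothetical: 100 fps on a 54 mm^2 die
t4 = perf_per_mm2(300.0, 545.0)   # hypothetical: 300 fps on a ~545 mm^2 die

# The T4 wins on raw throughput (300 vs 100 fps) but loses per mm^2.
print(f"normalized advantage: {x1 / t4:.1f}x")
```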

I probed Geoff on the reasons for such a dramatic difference. He reminded me the underlying architecture of the InferX X1 is much more reconfigurable than a hardwired part and the Tensor processors are finer grained. Remember, Flex Logix is also an FPGA company. This is a key part of their architecture’s competitiveness. Another differentiator is the high precision offered by the part. Full precision is used for all operations, there are no short-cuts. Geoff pointed out that some of their customers are doing ultrasound imaging in a clinical setting. Accuracy really matters here.

There are many more enhancements to the architecture that support high throughput and high precision. You can learn more about the InferX X1 on the Flex Logix website.

We concluded with a discussion of software support. Geoff explained that the compiler for InferX X1 is very easy to use. The compiler reads in a high-level representation of the algorithm in either TensorFlow Lite or ONNX. Most algorithm models are expressed in one of these formats. The compiler then makes all the decisions about how to optimally configure the InferX X1 for the target application, including items such as memory management. In a matter of minutes, an executable is available to run on the part. There is a lot of optimization going on during the compile process. Geoff described multi-pass analyses for things like memory optimization. Flex Logix appears to have optimized this part of the process very well.

The pricing for InferX X1 is very aggressive and supports very high volumes and associated discounts. I talked with Geoff about that strategy. It turns out Flex Logix is not only pricing InferX X1 to be competitive in the current market; the pricing will also support new and emerging markets that require very high volumes. These new markets will be enabled by a device that is as small, low power, high precision and high throughput as InferX X1. Flex Logix brings AI to the masses with InferX X1, and that will open new markets and change history.


Low Energy Intelligence at the Extreme Edge

by Bernard Murphy on 10-22-2020 at 6:00 am


Intelligence at the edge is a hot topic these days. Not having to go all the way to the cloud to recognize objects, faces, speech and so on. But I find promoters can be rather fuzzy about what they mean by “the edge”. For many, intelligence at the edge means intelligence closer to the edge than the cloud. In a gateway for example. Not actually in an edge device such as an earbud, surveillance camera, agricultural moisture sensor or whatever. Which I get. Surely AI is going to burn too much power to run in a tiny device. Is this a non-problem? Are there really use-cases for low energy intelligence at the extreme edge?

Use-cases

Yes there are. Wireless earbuds are getting smarter with active noise cancellation and 3D audio (what Apple calls spatial audio). Command recognition is still on the phone, but it doesn’t have to be. Smart glasses are making a comeback in support of AR. For surgeries, engine maintenance, guidance around factories, warehouses. Also for consumers, guiding you to features and products while walking around a store or finding a restaurant. For the hearing-impaired, supporting audio-zoom, amplifying sound from a speaker you’re looking at while suppressing other ambient noise. No need to point your phone at the speaker.

Gateways? Yeah, but…

Gateway support would be great if it was always nearby, always reliable, zero latency, close to zero cost, guaranteed privacy, the edge device didn’t have to burn a lot of power uploading raw video/audio and you had unlimited communication bandwidth so that all these smart devices don’t interfere with each other. That’s a lot of ‘ifs’. In reality, communication is imperfect and sometimes blocked. On a gateway your tasks compete with others, which adds different latencies to image and audio feeds. Which then leads to motion sickness or unusability.

Edge devices by design have tiny batteries and gateways are expensive so not always nearby. And the final insult, the more smart devices we have, the more we’ll struggle with bandwidth. I already have a problem with Bluetooth and Wi-Fi collisions. A better solution is what’s already happening in cars. Push more basic intelligence into the real edge. Let your earbuds, smart glasses, hearing aids carry the initial intelligence burden without need for communication. Upload to the gateway only for the hard AI tasks – natural language processing for example.

High performance, low energy requirement

This demands a very low energy budget for intelligent processing while still being able to do that processing quickly. Which should sound familiar to my readers: you want to run fast, then stop. That starts with a very low-energy trigger, say voice activity detection and trigger recognition (e.g. OK Google), which then turns on the real engine. You want performance, so vector processing and a lot of MACs to support neural net processing. But you want to run at low frequency (while still delivering good performance) because you’re running on tiny batteries. As soon as the task completes, the engine turns off again.
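The run-fast-then-stop budget can be sketched in a few lines. Every number below is a hypothetical placeholder, not a CEVA specification; the point is just how dominant the duty cycle is in the average:

```python
# Toy model of a duty-cycled edge-AI power budget. All numbers are
# hypothetical placeholders, not CEVA specifications.
def average_power_mw(listen_mw: float, active_mw: float, duty: float) -> float:
    """Time-weighted average of always-on trigger power and burst compute power."""
    return listen_mw * (1.0 - duty) + active_mw * duty

# e.g. a 0.05 mW always-on trigger, a 20 mW engine awake 1% of the time,
# and a small earbud-class 25 mWh cell:
avg = average_power_mw(listen_mw=0.05, active_mw=20.0, duty=0.01)
battery_mwh = 25.0
print(f"avg power: {avg:.4f} mW, runtime: {battery_mwh / avg:.0f} h")
```

Finishing the burst faster (lower duty cycle) directly multiplies battery life, which is why a fast engine that turns off beats a slow engine that stays on.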

The next wave?

We’re on a very active search for natural interfaces to practical value everywhere. Getting away from not just the laptop but even the cellphone. (Maybe the cellphone becomes a pocket gateway.) Connectivity and audio VR in our wireless earbuds, tight directional amplification for the hearing-impaired, useful AR, utility robots such as home vacuums (they need SLAM, also practical in this kind of solution), anomaly detection for home security and factory performance monitoring. Moving everything that now needs a screen and a keyboard closer to the way we naturally function. Which demands low energy intelligence at the extreme edge. CEVA recently introduced their SensPro family of IP for just this purpose: vectorized DSP IP, a range of MAC options and TensorFlow Lite Micro support. Bringing the goal closer. Check them out.

Also Read:

Combo Wireless. I Want it All, I Want it Now

Wi-Fi Bulks Up

5G Infrastructure Opens Up


Israel and Automotive Safety. More Active Than You May Think.

by Bernard Murphy on 10-21-2020 at 10:00 am


CadenceLIVE ran a session recently in Europe which I thought would be interesting to check out, especially around automotive needs. The live sessions were too early/late for me (middle of the night) and sadly the talks I really wanted to hear weren’t recorded. Instead, I dug around for updates on automotive electronics in Europe. ST, NXP and Infineon remain strong in automotive chips, spanning ADAS, drivetrain, infotainment, car networks and electrification. More interesting for my purposes are the Tier 1s: Bosch, Continental and ZF Friedrichshafen rank in the top 5 in the world. Some of these are edging into chip design of one kind or another on their own behalf. Then there are the major OEMs active in ADAS and autonomy – VW/Audi, Volvo, BMW and others. All have invested significantly in Israel for automotive safety – hence my topic.

Automotive development in Israel

Israel is well-established in tech; this is not news. All the majors have development there: Intel, Alibaba, Amazon, AMD, Apple, NVIDIA, Qualcomm, Marvell, Cisco, just picking a few off a long list. Some well-known names in automotive are Mobileye and Waze, both Israeli companies, acquired by Intel and Google respectively. And Argus, also Israeli, acquired by Continental. What may be slightly less visible is automaker activity in Israel. GM, VW, Mercedes, Ford and Bosch all have offices in Israel for advanced electronics and software development. Volvo recently joined this group through the CEVT innovation center based in Sweden. And ZF Friedrichshafen has strategic collaborations in Israel. This country seems pretty important in the automotive tech plans of almost everyone.

Veriest talk on verification for safety

Another area of strength for Israel, well known at least in our neck of the woods, is functional verification. After all, Verisity and ‘e’ were born there. So it shouldn’t be a surprise that Veriest (an Israeli design and verification services company I’ve mentioned before) hosted a talk at CadenceLIVE Europe titled “Is your design functionally safe?”. Per the Cadence stats, this was one of the best attended presentations at the show, unsurprising given all of the above. Perhaps also because they work with a number of the Tier 1 companies. I watched the replay and I have to admit the speaker (Mihajlo Katona) did a really good job. A very disciplined, nuts-and-bolts walk-through on validating the effectiveness of functional safety mitigation techniques. The talk was based on testing memories and logic in an AI processor using soft error injection.

Verifying mitigation techniques

Mihajlo drilled down into ECC for memories in the processors. He outlined the ECC agent model for checking and a constrained random model for injecting permanent and transient faults. Their model also contains a prediction unit to check whether the hardware correctly detects an injected error. He mentioned a practical detail I hadn’t considered – if an error is detected, the hardware will likely trigger an interrupt to the main system, which will take a little time to process. Then there should be a recovery flow in the hardware. The Veriest verification needed to check that this capability functions properly.
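The talk itself was about SystemVerilog/UVM-style verification; purely to illustrate the injector-plus-predictor idea, here is a toy Python sketch (all names hypothetical) that injects single-bit faults into a parity-protected memory model and checks that detection matches the prediction, constrained-random style:

```python
import random

def parity(word):
    """Even parity over the data bits."""
    return bin(word).count("1") & 1

def write_mem(mem, addr, data):
    """Store data together with its parity bit, as the 'hardware' would."""
    mem[addr] = (data, parity(data))

def read_mem(mem, addr, inject_bit=None):
    """Read back; optionally inject a single-bit fault before checking."""
    data, stored_p = mem[addr]
    if inject_bit is not None:
        data ^= 1 << inject_bit              # fault injection
    error_detected = parity(data) != stored_p
    return data, error_detected

def run_injection_test(trials=100, seed=0):
    """Scoreboard loop: the predictor says any single-bit flip must be flagged."""
    rng = random.Random(seed)
    mem = {}
    for t in range(trials):
        data = rng.getrandbits(8)
        write_mem(mem, 0, data)
        inject = rng.choice([None] + list(range(8)))  # maybe flip one bit
        _, detected = read_mem(mem, 0, inject_bit=inject)
        predicted = inject is not None
        assert detected == predicted, f"trial {t}: detection mismatch"
    return True
```

A real SECDED ECC model would also have to predict *correction* of single-bit errors and detection-only of double-bit errors, but the structure (injector, reference predictor, comparison) is the same.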

For logic verification he assumed BIST, an ISO 26262 recommended mechanism – presumably because in-operation testing is an expected capability in support of fail-operational behavior, and of 15-year lifetimes. Here Mihajlo talked about the best way to inject faults (permanent and transient), which should be agreed in discussion with the design team.
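Again only as an illustration, not the flow from the talk: the essence of logic BIST is an LFSR generating pseudo-random patterns into the logic under test, with responses compacted into a signature; an injected stuck-at fault shows up as a signature mismatch. A toy Python sketch, all names hypothetical:

```python
def lfsr_patterns(n, seed=0b1011, taps=(3, 2)):
    """4-bit LFSR as a toy BIST pattern generator (period 15)."""
    state, out = seed, []
    for _ in range(n):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1           # XOR of tap bits
        state = ((state << 1) | fb) & 0xF
    return out

def circuit(x, stuck_at_fault=False):
    """Toy logic under test: XOR-reduce 4 bits; optional stuck-at-0 on bit 0."""
    if stuck_at_fault:
        x &= ~1                               # inject stuck-at-0 on the LSB
    return ((x >> 3) ^ (x >> 2) ^ (x >> 1) ^ x) & 1

def bist_signature(fault=False):
    """Compact the response stream into a signature (a crude shift-XOR MISR)."""
    sig = 0
    for p in lfsr_patterns(15):
        sig = ((sig << 1) ^ circuit(p, fault)) & 0xFF
    return sig
```

A good signature matches the fault-free golden value; the injected fault perturbs the response stream and hence the signature, which is exactly what the verification environment must confirm the hardware flags.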

A very detailed overview, worth watching. You can watch the replay HERE (you may need to register). Once in, look under verification sessions for “Is your design functionally safe?”.

Also Read:

Veriest Meetup Provides Insights on Safety, Deadlocks

Online Verification Meet-up With Intel and Arm!

Python in Verification. Veriest MeetUp


Synopsys Teams with IBM to Increase AI Compute Performance 1,000X by 2029

Synopsys Teams with IBM to Increase AI Compute Performance 1,000X by 2029
by Mike Gianfagna on 10-21-2020 at 7:30 am


Anyone who frequents SemiWiki will likely know Moore’s Law. The prediction made by Gordon Moore over 50 years ago regarding the relentless increase in transistor density and reduction in cost has tracked well for a very, very long time. In recent years, there has been spirited discussion about the end of Moore’s Law. This is a discussion for another time and another blog post. I want to focus here on a significant announcement made today by a company we all know: Synopsys Helps Advance IBM’s Vision of 1,000 Times Improvement in AI Compute Performance during the Coming Decade. There’s a lot packed in that title and I’ll get to the details in a moment as to how Synopsys teams with IBM to increase AI compute performance 1,000X by 2029.

First, I want to point out that a new and different version of Moore’s Law, one that tracks AI performance, is embedded in this announcement. IBM’s vision, as it pertains to AI processor cores, targets a performance improvement of 2.5X each year, with a goal of 1,000X overall improvement by 2029. The announcement states that IBM Research has realized a gain of twice that in its first year, so they’re off to a good start. If you worry that Moore’s Law is out of gas, this new metric offers something to look forward to.
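For perspective on the compounding (my arithmetic, not the press release’s): even a steady 2X per year clears 1,000X within a decade, so a 2.5X annual cadence leaves comfortable margin against the 2029 goal:

```python
# Compounding check on the announced numbers.
annual_gain = 2.5
years = 10
print(round(annual_gain ** years))  # 2.5X/year compounds to ~9537X over a decade
print(2 ** years)                   # even 2X/year already gives 1024X
```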

Back to the details of the announcement. IBM is a massive organization that always thinks big. The AI Hardware Center, announced last year, is a perfect example of thinking big. Headquartered in Albany, NY, the enterprise is enabling next-generation chips and systems that support the tremendous processing power and unprecedented speed that AI requires to realize its full potential, according to IBM. While IBM is the catalyst for this effort, they are far from alone. Early partners include:

  • New York State
  • SUNY Polytechnic Institute
  • Rensselaer Polytechnic Institute
  • Samsung
  • Applied Materials
  • Tokyo Electron Limited (TEL)
  • Synopsys

A rather impressive list of industry-leading companies. Let’s look at how Synopsys fits in this collaboration. According to the press release, there are three primary areas where Synopsys brings expertise:

  • Multi-die integration in a package, silicon design and verification: Technology here includes: Synopsys 3DIC Compiler, Fusion Design Platform™ and Verification Continuum® Platform, which include the use of state-of-the-art functional verification, prototyping and emulation systems that address the size and scale of the designs being developed, and support for hardware and software co-design and co-analysis methodologies
  • Silicon engineering: Providing software to address critical manufacturing and yield challenges introduced by leading-edge process technologies such as the use of novel materials, gate-all-around 3D stacked architectures, and source and mask creation for EUV technology. The Synopsys Design Technology Co-Optimization (DTCO) solution combines its capabilities to provide more options and help achieve global optimality
  • Silicon IP: Addressing the processing, memory performance and real-time connectivity requirements of AI chips, providing a broad portfolio of silicon-proven DesignWare® IP such as LPDDR5 and PCI Express® 5.0 for a wide array of applications

This is a very broad and technically deep list of capabilities. Synopsys is uniquely qualified to deliver strong, AI-relevant technology across design/verification, manufacturing/yield and advanced silicon IP. The Synopsys relationship here is not that of a typical EDA company. Synopsys, along with IBM and the rest of the collaborators, is aiming to change the course of AI deployment.

When you boil it all down, AI is really the fundamental fuel for innovation in the coming years. You’d be hard pressed to cite any significant technology advancement that wasn’t at least partially enabled by AI. As many know, AI as a technology isn’t new. The term “AI” dates back to the 1950s. What has created the recent massive deployment of AI is the ability to run these complex algorithms much faster, with lower power and a reduced hardware footprint. Giving the whole process a “turbo boost” of 1,000X will quite likely change the course of history. This effort in general, and the Synopsys contributions in particular, are something to watch.

Arun Venkatachar, vice president, Artificial Intelligence and Central Engineering at Synopsys commented, “This is a unique opportunity for Synopsys to be part of a collaborative effort that connects the entire semiconductor value chain. To realize IBM Research’s vision, the AI hardware being designed requires a fundamentally new approach, which needs innovative strategies – from tools, to IP, to workflows and manufacturing. Our involvement with the AI Hardware Center provides a platform for us to help drive the future of AI chip design with a synergistic partner.”

“Together, AI and hybrid cloud will play a critical role in the next generation of enterprise computing and scaling AI, with new hardware solutions as part of a wider effort at IBM Research to envision and realize What’s Next in AI,” said Mukesh Khare, vice president, Hybrid Cloud, IBM Research. “To achieve this, we need to build a new class of AI hardware accelerators that increase compute power without the demand for more energy. Additionally, developing new AI chip architectures will enable companies to dynamically run large AI workloads in the hybrid cloud. Synopsys’ unmatched breadth of experience and technical offering is an extremely valuable asset in this effort.”

IBM considers Synopsys the lead EDA partner on this program. I’ll end on that note. I, for one, will be watching as Synopsys teams with IBM to increase AI compute performance 1,000X by 2029. Here’s to a new, exciting and much improved future.

Also Read:

Digital Design Technology Symposium!

Netlist CDC. Why You Need it and How You do it.

The Big Three Weigh in on Emulation Best Practices


The Most Interesting CEO in Semiconductors!

The Most Interesting CEO in Semiconductors!
by Daniel Nenni on 10-21-2020 at 6:00 am


Hands down, without a doubt, the most interesting CEO in semiconductors is Lip-Bu Tan, founder of Walden International and current CEO of Cadence Design Systems. If you want to talk about a man with a plan, it’s Lip-Bu Tan.

Before we get into the fireside chat between Tom Caulfield and Lip-Bu at the GTC 2020 virtual event, let’s do a quick biography:

Lip-Bu holds a B.S. in Physics from Nanyang University in Singapore, an M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and an M.B.A. from the University of San Francisco. His first big pivot was leaving the MIT PhD program and going to San Francisco for an MBA. As the story goes, Three Mile Island happened, so a PhD in Nuclear Engineering lost its appeal – their loss, our gain.

Lip-Bu founded Walden International in 1987. According to his Wikipedia page, he named the firm after the book “Walden” by Henry Thoreau, reflecting its contrarian theme. He started with $20M and now Walden is worth billions.

In 2004 Lip-Bu joined the Cadence board of directors and took the CEO job in 2009. To make a long story short, Cadence stock (CDNS) was trading at less than $5 a share at the time and now trades at more than $110.

Lip-Bu also brought diversity to the ranks of EDA CEOs. In 1995 former Cadence CEO Joe Costello characterized EDA this way: “Have you seen three big dogs hovering over one bowl of dog food? It’s not a pretty picture.” Synopsys CEO Aart de Geus is 100% EDA. Former Mentor CEO Wally Rhines is 50% semiconductor (TI) and 50% EDA, and then there is Lip-Bu Tan, 100% of everything. Bottom line: EDA is now three race horses running in the semiconductor derby.

The fireside chat started with a little levity, Lip-Bu saying that he has a twin, which is why he can be in two places at one time (EDA CEO and semiconductor VC). Funny, because it sure seems that way.

Lip-Bu’s thesis for this talk was something that SemiWiki has been telling us for some time now. We are at the beginning of a Datacentric Revolution fueled by 5G, autonomous driving, Industry 4.0, hyperscale, AI/ML, and data analytics. It’s all about the Data.

According to Lip-Bu 90% of the data we have today has been generated in the last two years, 80% of that data is unstructured, and only 3% has been analyzed/utilized much less monetized.

Semiconductors are collecting data on the edge and processing it in the cloud. Processing data is mostly done on general purpose CPUs and GPUs with FPGAs mixed in. The shift to domain (application specific) processors has already started and will continue.

Next comes storing data in a very efficient way with low-latency access. New types of memory, from DRAM to HBM and other emerging memory technologies (QNM), are appearing. Data transmission speeds must also increase, and then you must organize and analyze the data for monetization across vertical markets.

Lip-Bu has now invested in more than 30 AI/ML companies.

If you put this talk together with Tom Caulfield’s keynote on edge-device power and connectivity, you will see why the partnership between GlobalFoundries and Cadence is a natural fit. I would also hope that Tom gets Lip-Bu involved with GF at the advisory level for future growth and positioning. Nobody knows more about semiconductor IPOs than Lip-Bu Tan, absolutely.

Also Read

Covering Configurable Systems. Innovation in Verification

Tempus: Delivering Faster Timing Signoff with Optimal PPA

Bug Trace Minimization. Innovation in Verification


Automotive Update at Arm DevSummit from VW

Automotive Update at Arm DevSummit from VW
by Daniel Payne on 10-20-2020 at 10:00 am


Although our family has down-sized to just one vehicle, my dream car is still a Tesla, both because it’s an EV and because Tesla has a vision for autonomous vehicles. At the recent Arm DevSummit I watched a fireside chat with Alexander Hitzinger, CEO of Artemis, the skunkworks at Audi, part of the Volkswagen Group. I knew that Audi had already designed and delivered a very capable EV in the e-tron, and it certainly makes sense that VW would centralize all of their EV and autonomy talent into the Artemis group for efficiency, instead of having each brand duplicate efforts.

Hitzinger has quite the eclectic history, having worked at diverse companies like Toyota, Cosworth, Red Bull, Porsche and Apple. He spoke from a campus in Germany where their on-premises cloud services will be located. Alexander worked on autonomous vehicle technology at Apple, and now at VW he has a chance to get the #1 OEM into autonomous vehicles.

Goals

The big goal at VW is to ship some 20 million EVs in the next 10 years, which is ambitious, because in 2019 only 2.1 million EVs were shipped worldwide, according to the IEA.

Requirements

Hitzinger talked about how customers want an EV that is convenient, fun to drive, environmentally friendly and built on sustainable technology. Sounds about right to me, except that I would also add that an EV has to look attractive and not be just a small box where I feel squished when seated inside.

Partners

Just like Tesla partnering with Panasonic for battery technology,  VW is partnering with QuantumScape, a maker of solid-state, lithium-metal batteries based in San Jose.

Software-Defined Vehicle

This was the first time I had heard the phrase software-defined vehicle; the idea is to continuously add new features, for both HW and SW, through over-the-air updates, just like on our smartphones and other Internet-enabled devices. The software initiative at VW is called Car.Software, headed up by Markus Duesmann, who will lead some 5,000 IT experts.

Expect to see plenty of 3rd party vendors partner with VW to deliver the complete EV experience.

Autonomous Vehicles

Reaching Level 4 and finally Level 5 autonomy will be a long process, although the e-tron is already at Level 3.

Why Arm and VW

VW needs choices during vehicle development, so working with different partners, even changing partners over time, is expected. Specs need to be stable, and vendors need to offer transparency. Arm has been supplying semiconductor IP to the automotive industry for a while, and as compute requirements grow ever larger, Arm has been creating standards and platforms that allow VW to swap out approaches without restarting from scratch.

Why AV

Alexander drives a Porsche 911 when he wants the thrill of a high-performance car, but for commuting to work it’s just another VW sedan. AVs promise to give us more free time while commuting and running errands, while decreasing the number of auto accidents.

Summary

This fireside chat at Arm DevSummit whetted my appetite to learn more about what VW is assembling to bring more EVs to market. The newly announced ID.4 looks to be a competitive EV against the Tesla dynasty, while the e-Golf felt too small for my taste. Let’s see if the Artemis skunkworks can pivot VW into a leadership position, chasing down Tesla.