Tcling Your Way to Low Power Verification
by Bernard Murphy on 09-11-2019 at 5:00 am

OK – maybe that sounds a little weird, but it’s not a bad description of what Mentor suggests in a recent white paper. There are at least three aspects to power verification – static verification of the UPF itself and of the UPF against the RTL, formal verification of state-transition logic, and dynamic verification of at least some critical interactions between the functional logic and power transitions, such as correct req/ack handshaking with a powered-down function that must turn on in order to acknowledge. This third set is where the white paper introduces a Tcl-based methodology.

The author (Madhur Bhargava – lead MCS at Mentor) first contrasts the new approach with the traditional method of testing compliance with any requirement in RTL verification – adding SystemVerilog (SV) assertions. He acknowledges that a number of verification tools provide support for power verification of this type, but points out that these come with several limitations:

    • Built-in checks don’t cover all the checks you will want to perform, so you’re going to have to complement these with some of your own checks
    • Adding your own checks is not so simple, particularly since you have to be very comfortably bilingual between UPF and SVA
    • And whatever checks you are performing, these runs will be slow because you’ve added a lot more checking overhead to your mission-mode functional checks.

The core idea behind the Mentor approach is to do power-verification checking post-simulation. That immediately fixes the third problem for regular verification regression users, though you still have to run your power checker. Next, your power checker runs on the post-simulation database, using a Tcl app built on the UPF 3.0 Information Model APIs and running on top of the waveform CLIs to access simulation data from Tcl-based procedures. That fixes the second problem – a power verification expert no longer has to be multi-lingual; Tcl know-how (and UPF know-how) is all they need.

Finally you need to run your power checkers. These are going to check compliance with power intent by:

    • modeling control sequences/protocols
    • iterating over the design/power domains to access low-power objects
    • accessing the waveform info associated with these objects
    • checking for any mismatch with intent and flagging errors as needed
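
To make that concrete, here is a minimal sketch of what one such post-simulation check might look like, verifying that an acknowledge never asserts while its power domain is still shut off. The helper commands (get_power_domains, get_attribute, get_signal_transitions, get_supply_state) are hypothetical stand-ins for the UPF 3.0 information-model queries and the simulator’s waveform CLI; the actual command names depend on your tool.

    # Hypothetical post-simulation check: an ack must only assert while its
    # domain's primary supply is ON. The query commands below are illustrative
    # stand-ins for the UPF 3.0 information-model and waveform-CLI calls.
    proc check_ack_after_power_on {pd ack_sig} {
        foreach {time value} [get_signal_transitions $ack_sig] {
            if {$value == 1 && [get_supply_state $pd $time] ne "ON"} {
                puts "ERROR: $ack_sig asserted at ${time}ns while $pd is powered down"
            }
        }
    }

    # Iterate over every power domain described in the UPF and run the check
    foreach pd [get_power_domains -all] {
        check_ack_after_power_on $pd [get_attribute $pd ack_signal]
    }

Because the checker reads a saved simulation database rather than running inside the simulator, it can be refined and rerun without touching the functional regressions.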

I find a number of things interesting about this approach. First, it means that dynamic power verification doesn’t need to slow down functional debug and can run in a separate thread from that debug. For me this is another example of how elastic-compute concepts are becoming prominent in the EDA world.

Second, it’s good to see more open-minded approaches to verification. There is a lot of good in SV/SVA, but we don’t have to insist that dynamic verification work only through that channel. Exporting the data to Tcl-based apps for verifying power intent written in Tcl is a natural extension.

Madhur wraps up with some limitations of the approach. First, the UPF 3.0 model has no concept of connectivity, so checks requiring knowledge of source-to-sink paths (as one example) cannot be coded within the standard. I think this is purely a standards issue; providing a Tcl API to design connectivity is a problem that was solved a long time ago. The committee just needs to find a mutually agreeable way to incorporate appropriate APIs.

He also mentions that an SVA assertion can abort a simulation run as soon as an error is found, whereas this approach, being post-sim-based, will not trigger an abort. I think this is a minor consideration. If simulation regressions are also being run for functional verification, continuing past a power bug is not necessarily a bad thing – much of the simulation may still be useful after that event. Power intent debug continues on a parallel path, and power bugs can be fed back and rolled up with all other issues as they are discovered.

You can read the white paper by registering HERE.


GM’s Dashboard Surrender
by Roger C. Lanctot on 09-10-2019 at 10:00 am

There was a time when General Motors’ OnStar offering was readily and clearly understood by consumers for the fact that it did one thing incredibly well: summoning assistance in the event of a vehicle crash and airbag deployment. This now-23-year-old connected car solution still stands strong as a powerful brand-defining statement in the midst of an industry-wide muddle over what a connected car is and does.

Sadly, in recent years, GM has become fixated on “monetizing” vehicle connectivity – mining the vehicle connection for value propositions related to contextual marketing and subscription-based services. In the process, the OnStar brand has been muscled aside by competing value propositions ranging from Maven (mobility) to Marketplace (contextual marketing), MyLink (in-dash apps and smartphone projection), and embedded Wi-Fi from AT&T.

Recently, GM cut back the OnStar free trial to one month and terminated the personal minutes that allowed consumers to make phone calls directly over their embedded OnStar connection – not for free, of course. Interestingly, OnStar is zigging here (cutting the length of the free trial period) while the rest of the industry (VW and others) is expanding free trials from months to years. The cutback of personal minutes, while logical in the context of Bluetooth-connected smartphones, removes one of the charming novelties of the OnStar solution – and what once represented the first hands-free in-car phone solution.

Most recently GM has offered unlimited data to select Cadillac owners for $15/month, throwing in three months of free access to Pandora Premium as a deal sweetener. This offer shows GM desperate to monetize the Wi-Fi capability of its OnStar embedded solution – ignoring the reality that most Cadillac owners likely have Wi-Fi access via their smartphones and won’t need or be interested in such an offer.

This offer is particularly ironic in the context of the personal minutes termination. It makes sense to end personal minutes given the availability of Bluetooth smartphone connections. By the same token, pushing Wi-Fi from the device built into the car (when Wi-Fi is already built into most smartphones) makes little or no sense – regardless of whether three free months of Pandora Premium are thrown into the offer.

The reality is that the Cadillac offer of three free months of Pandora actually looks pusillanimous relative to Porsche’s offer of three years’ worth of free streaming of Apple Music in the new Taycan. The bottom line is that GM is muddying up the value proposition of the OnStar brand with meaningless adjustments and penny ante offers that don’t speak to the core value proposition of the platform.

OnStar has always been about safety. At a time in the market when insurance companies and Tier 1 suppliers and even highway infrastructure companies are all looking for ways to reduce highway fatalities and reduce collisions by leveraging vehicle connectivity, GM’s OnStar has shifted its focus to streaming apps, Wi-Fi, and in-vehicle advertising and vehicle-based commerce.

Just in the past two weeks news has arrived of GM adding SiriusXM’s 360L in-dash interface and, as of yesterday, upcoming plans to deploy the Google Automotive Services platform. These systems will join the existing in-vehicle integration of Apple CarPlay, Android Auto, and GM’s own MyLink smartphone-based app platform – plus Xevo’s Marketplace.

All of this means the integration of multiple voice interfaces, multiple payment platforms (Xevo plus SiriusXM), a half dozen or more user interfaces, multiple navigation options, a half dozen traffic information sources (at least two options available within SiriusXM alone), and a data exchange free-for-all magnified by GM’s separate $25M investment in Wejo – which has begun acting as a GM connected car data broker. Completely absent from this moshpit of dashboard integrations is a consistent vision of customer engagement and retention with an emphasis on vehicle care and safety-prioritized operation.

Companies such as BMW and Daimler are already several years into vehicle integration programs that enable car-to-cloud-to-car communications of road hazards. Audi and BMW have been pioneering the integration of cellular-based traffic light signal phase and timing communications. Audi has even added automatic toll-payment integration. Ford, Volvo, Daimler and BMW have joined the European Union’s Data Task Force for integrating vehicle-based data with infrastructure-based data sources to communicate traffic alerts and road hazards – all in the interest of reducing crashes and highway fatalities.

The one place in the car where car companies have the opportunity to interact with consumers and define their brand values and value propositions is in the center-stack display (and maybe in the instrument cluster). While Volvo, Ford, Daimler, BMW, and Audi are staking their claim to safety leadership, GM is capitulating to commercial priorities and seeking to drive in-dash commerce and peddle vehicle data.

This embarrassing lack of strategy and vision marks GM’s complete abdication of its once-vaunted safety leadership. While Ford Motor Company is standardizing advanced driver assist features and opening up vehicle APIs for insurers to offer discounted coverage based on vehicle operation, GM remains a laggard in deploying ADAS tech and a leader in in-vehicle driver distraction.

It’s not really a surprise given that most of the original OnStar team has now either retired, been bought out, or moved on. The DNA of the team that brought the groundbreaking OnStar system – Project Beacon – to the market more than two decades ago is long gone. The division has been renamed Global Connected Consumer Experience. The mission is now an incomprehensible muddle.

A group that once controlled its own hardware and software development – its own destiny – has been gutted. The leadership has been told to fend for itself by generating revenue from a platform that was conceived to assist drivers in need of assistance after catastrophic crashes – not for delivering in-dash coupons.

GM has surrendered its dashboards. This is the saddest turn of all. A leader has become a laggard in pursuit of what the company perceives as the next great challenge. But the original value proposition remains. With 100 people dying on U.S. highways every day and 1.2 million people dying on roadways globally every year, the priority must be saving lives and avoiding crashes.


AI Chip Prototyping Plan
by Daniel Nenni on 09-10-2019 at 6:00 am

I recently had the opportunity to sit down with a chip designer for an AI start-up to talk about using FPGA prototyping as part of a complex silicon verification strategy. Like countless other chip designers for whom simulation alone simply does not provide sufficient verification coverage, this AI start-up also believed that FPGA prototyping would be a critical part of a successful chip delivery plan. When millions of dollars are at stake for advanced silicon geometry masks, not to mention the potentially fatal consequences of a startup missing a market window, getting silicon right the first time is key to success. This AI chip designer was running into FPGA prototype platform capacity limitations … and running out of time.

Learn about AI prototyping by attending this webinar replay: How ASIC/SoC Prototyping Solutions Can Help You!  Register now! Complex SoC case study included!

Throughout our discussions I couldn’t help being reminded of the value of a well-thought-out FPGA prototyping plan early in the chip development process to minimize nasty surprises late in the design cycle. Considerations include what chip functionality needs to be prototyped and when, estimated FPGA gate capacities, what prototype performance will be needed, and how the prototype will be tested. In addition, FPGA prototypers should consider how the FPGA prototype can be easily scaled from one phase of prototyping to the next.

We reviewed the AI chip designer’s FPGA prototyping “vision” and what options were available to complete the FPGA-based verification, then scale up for customer demonstration platforms and more comprehensive design verification. The conversation revealed that three phases were envisioned for FPGA prototyping:

· Phase 1 – verification of a “minimum slice” of the AI chip including a few AI processor cores before tapeout

· Phase 2 – an FPGA platform for a “larger slice” of the AI chip including more AI processor cores for early customer demonstrations, preferably before tapeout to assure alignment with customer needs

· Phase 3 – at some later stage, an FPGA prototype of the whole AI chip, where the whole AI chip would require about one billion equivalent ASIC gates from the FPGA prototype! Yes, that’s billion with a “B”.

Phase 1 was the immediate concern for this AI project, and a solution was needed quickly to allow enough time for good AI chip test coverage before tapeout. With minimal resources, start-ups do not have the luxury of major late-stage corrections to any part of their verification strategy, so changing FPGAs to get more gate capacity would require some trade-offs: do less with the available FPGA platform, or change to a bigger FPGA and run the risk of impacting tapeout schedules. FPGA gate capacity scaling can be accomplished by changing to a larger FPGA, or by using multiple FPGAs with the inherent need to partition the design between the multiple FPGAs … each approach has its pluses and minuses when it comes to impacting project effort.

Phase 2 should be scalable from Phase 1, if the Phase 1 FPGA platform provides a smooth path to scale up gate capacity for Phase 2. If Phase 2 calls for multiple FPGAs, and the Phase 2 approach is a linear scale up from Phase 1, which is usually the case with AI chips composed of a large array of identical processors, the choices made for the Phase 1 platform can simplify the Phase 2 solution.

Phase 3, at a billion gates, requires a separate discussion, and traditional emulation is the easy answer. The challenges with emulation are cost and performance. This AI company had considered emulation but found it simply too expensive for a start-up. The company thinks it has the technical expertise to build a billion-gate FPGA prototype but has the good sense to admit that such an undertaking would be monumental and not the primary focus of its business. And then there is emulation performance: emulation sacrifices speed for high design visibility. While an emulator can achieve performance of close to 1 MHz, FPGA prototypes are capable of tens of MHz.

S2C’s Prodigy Family of FPGA prototyping solutions provides ample FPGA gate capacity options using either Intel FPGAs or Xilinx FPGAs … the user may choose their favorite brand of FPGA, a decision which is usually based on design tool preferences. The largest Xilinx FPGA is the VU440, which will support 30M to 40M effective logic gates, assuming a conservative gate utilization of 50% to 60%. Prodigy Logic Modules come with one (Single), two (Dual), or four (Quad) VU440 FPGAs, which translates conservatively to a range of available gate capacities from about 30M gates to over 100M gates.
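
As a rough illustration of that arithmetic (my own numbers, taking the mid-point of the utilization range quoted above), the scaling across the Logic Module options works out as follows:

    # Illustrative capacity estimate: assume ~35M effective ASIC gates per
    # VU440 (mid-point of the 30M-40M range above), scaled by FPGA count.
    set gates_per_vu440 35e6
    foreach {module fpgas} {Single 1 Dual 2 Quad 4} {
        puts [format "%-6s Logic Module: ~%.0fM effective ASIC gates" \
            $module [expr {$fpgas * $gates_per_vu440 / 1e6}]]
    }

The Quad module’s roughly 140M gates at that mid-point is consistent with the conservative “over 100M gates” figure above.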

The Prodigy Family also offers a high speed channel for the transfer of large amounts of transaction-level data between the FPGA prototype and a host computer (ProtoBridge), multi-FPGA debugging that allows deep off-chip trace storage (8GB), multi-FPGA trace viewing in a single window (MDM), and a rich family of ready-made daughter cards (80+) for quick assembly of a prototyping context.

S2C has been in the FPGA prototyping business for over 15 years, so their prototyping hardware is solid. S2C pricing for the VU440 Logic Modules starts at under $50K for a Single Logic Module and scales up to the Quad Logic Module. To get a quick S2C quote click here.

Learn about AI prototyping by attending this webinar replay: How ASIC/SoC Prototyping Solutions Can Help You! Register now! Complex SoC case study included!

Also Read:

WEBINAR: How ASIC/SoC Rapid Prototyping Solutions Can Help You!

Are the 100 Most Promising AI Start-ups Prototyping?

FPGA Prototyping for AI Product Development


For EDA Users: The Cloud Should Not Be Just a Compute Farm
by Randy Smith on 09-09-2019 at 10:00 am

When EDA users first started considering using cloud services from Google, Amazon, Microsoft, and others, their initial focus was getting access for specific design functions, such as long logic or circuit simulation runs or long DRC runs, not necessarily for their entire design flow. If you choose to use the cloud this way, you allocate virtual machines, and all the files related to the job are uploaded. Once the results are transmitted back to the user, you release the virtual machines, including their associated storage. While this sounds simple, it is likely very inefficient to keep uploading many of the same files with each job. We need to consider persistent storage in the cloud to see how it competes with on-premise design centers.

WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®

Rather than looking to cloud services as a one-off compared to your internal computing center(s), consider them yet another compute center. Users of ClioSoft’s SOS7 design management platform have already been efficiently managing shared data between different worksites with great results. Frequently used data is cached locally for greater efficiency. It is also seamlessly managed by SOS7 along with revision control and all the other aspects of design data management. The main ingredient to add to your cloud solution to make this work is persistent storage, such as Google Cloud Filestore (GCF). Recently, ClioSoft worked with Google to obtain hard data to quantify the benefits of this approach.

Setting up the environment was straightforward, as ClioSoft documented on the Google Cloud Blog. ClioSoft then ran through some of its standard benchmarks that simulate typical EDA customer workflows. Some comparative environments were also set up to measure the differences in performance. The results were quite informative and contained some surprises.

Perhaps the first takeaway is that setting up a design center environment in the cloud is not significantly more difficult than setting up an on-premises design environment. Users now need to consider the cost of maintaining their on-premises design centers as opposed to building their design environment in the cloud. Certainly, costs will be shifted from capital expenses to operating expenses. I will leave that to the CFOs to study further.

Focusing on just the file system interactions is important here, since performance gains can obviously come from faster CPUs as well. In general, the simulations show that SOS7 performance on GCP (Google Cloud Platform) is as good as or better than on-premises systems. When an on-premises network used shared NFS/NAS storage, performance degraded markedly, with runs taking nearly three times as long. A most notable discovery was that Cloud Filestore provides near-local-drive performance while providing all the benefits of a shared drive.

Worst case, I think these results show that you will not lose any performance by using cloud computing with persistent storage, at least not with GCP and Google Filestore. I should note that Google also just announced the acquisition of Elastifile. Google describes Elastifile as a pioneer in solving the challenges associated with file storage for enterprise-grade applications running at scale in the cloud. They’ve built a unique software-defined approach to managed Network Attached Storage (NAS), enabling organizations to scale performance or capacity without cumbersome overhead. Google is expected to integrate Elastifile with Google Cloud Filestore following the completion of the merger.

Cloud performance aside, I am anxious to see how the larger EDA companies will evolve their business models to handle the increased usage of the cloud. This issue was a significant point raised by Joe Costello at Semicon West in July (more details in my blog on that topic). Users are likely to want to pay for EDA tools only when they use them, as you would expect with a SaaS model. Hopefully we will have more to share about that topic soon as well.

The usage of the cloud is only going to increase for EDA designers. Thanks to ClioSoft for analyzing the different ways to use the cloud and showing the huge benefit of using persistent storage in the cloud.

WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®

Also Read

IP Provider Vidatronic Embraces the ClioSoft Design Management Platform

56th DAC ClioSoft Excitement

A Brief History of IP Management


Actel TI and TSMC Foundry Woes
by John East on 09-09-2019 at 6:00 am

TSMC was founded in 1987 by Morris Chang.  At about the same time, I was wrestling with the question of whether or not to join Actel.  Morris had been a top executive at Texas Instruments during the period when TI took ownership of the TTL market.  (See my week #8.  Texas Instruments and the TTL Wars)  I have to admit that when I first heard about what TSMC was doing I was unimpressed.  I didn’t like their prospects at all.  (I was wrong.)  I was a VP at AMD then. TSMC was anxious to do business with AMD.  After having a few discussions with Steve Pletcher, their VP of sales at the time, I formed the opinion that TSMC was ready, willing, and able to do business with pretty much anyone who came to them.  (Wrong again!!)  Besides TSMC, there was also a lot of foundry capacity available in Japan.  Most of the Japanese DRAM manufacturers had overbuilt their fab capacity during the boom of ’83 / ’84 and were willing to act as foundry sources for American IC houses. So  — my analysis was — there’s plenty of capacity available to fabless semiconductor companies.  If I join Actel, foundry will be the least of my worries. (Wrong again.  Three strikes and you’re out!!!)

There was a problem.  There were indeed many different companies with fab capacity. Yes.  They were willing to take foundry business.  But!!! Essentially none of them were interested in doing business with a custom process.  They all had standard processes and wanted to sell those wafers —– They didn’t want to spend money developing new processes – especially processes that would be used by just one customer.   To exacerbate the problem, I hadn’t analyzed the process requirements well enough when I was deciding to join Actel.  The process we needed was difficult!  Really,  really difficult!!!  Several new steps were going to be problematic, but one of them  — a requirement to build a very, very thin ONO layer with extremely tight control  — was nearly a killer.  No one wanted that headache in their fab.  To boot, my arguments to the foundries about the huge volumes that we would soon be ordering fell on deaf ears.  They just weren’t buying it.  Despite numerous requests,  TSMC always declined our business, politely telling us that they’d be happy to make wafers for us using any standard process they had.

When I arrived at Actel, we had one firmly established foundry relationship  —  Data General.  DG was a Boston based minicomputer company who had a small, old fab in Sunnyvale.  It was OK for proving out our product,  but it wasn’t at all a modern,  low defect fab.  Our yields would be low and our costs high if they were to be our primary foundry.  It didn’t look like it would be a good long-term relationship for Actel.  I wondered what to do about that.  While I was wondering what to do about it, they solved the problem for me.  They announced that they were shutting down the fab immediately leaving us fabless in the true sense of the word!  To be a fabless company with a complex,  non-standard process that no foundry wants to run is not a good thing!!   ☹☹☹

We had been having discussions with Matsushita Electric Corporation.  (MEC)   They liked the idea of programmable products in general and reasoned that it might be good to have access to our technology.  They had already made a few test runs of wafers in their R&D fab for us by the time I got there.  Their idea was to get a handle on just how hard the process would be to run.  In fact, they had many problems,  but at least the yields weren’t zero. Of course, we didn’t want to give them rights to sell their own FPGAs, but were willing to give them rights to use our technology in their ASICs.  After almost a (very worrisome) year of negotiating and tinkering around in their R&D fab, they agreed to bring up our process in their production fab if we would let them be our sales rep in Japan.  We agreed.

All of that sounded good except that the market for ICs was picking up. MEC’s fabs were filling up.  They had no appetite for spending a lot of effort on our custom process when they had more business than they could handle with their own products and processes.  In the end they did bring up our process. A lot of problems had to be solved along the way,  but they brought it up.  We owe them a lot!  But the rules were pretty simple.  Don’t jerk them around!!!  Requests for process tweaks, line holds,  splits etc would not be tolerated!  Those were tough rules that would affect us greatly in the days to come, because our process was far from stable — it needed a lot of work!  There seemed to be a new process problem lurking behind every bush  — and there were lots of bushes!!!

Parallel to the MEC negotiations,  Actel had also begun negotiations with TI.  The general idea was TI was going to give us some cash and guarantee us foundry capacity.  In return they would get second source rights to the first two Actel families.  The negotiations progressed well for a while,  but then reached an impasse.  By the time I came on board,  negotiations had ceased.  Both sides had walked away from the table. The deal was dead. I joined Actel the morning of December 1,  1988.  Early that afternoon Bill Davidow (Who was on our board.) called me and told me that I needed to go to Texas and patch things up in a hurry.  He was understating it.

So  –  the December 1988 status was this:  TSMC had turned us down. Data General was shutting down. TI was dead, and MEC was not looking good.   We were in trouble!!  We had a mask set and nowhere to run it. Cash was beginning to run low.  We needed to begin work on another round of venture financing,  but I couldn’t imagine anyone giving us money if we didn’t have a fab committed to building our product. A quick but easy to understand summary?

 Looks like we’re screwed!!

I wondered if AMD would take me back.

Happily, TI was willing to restart the talks. I flew to Texas with my fingers crossed.  Wally Rhines was to be the guy on the other side of the negotiating table.  Wally ran the integrated circuit business for TI.  I didn’t know Wally,  but I knew several high-level TI execs by reputation.  Maybe if Wally does another series for Semiwiki he can elaborate,  but the short version  — they were very, very, very tough people.  Fred Bucy?!!  Mark Shepherd?!! Morris Chang?!!  Wow.  You didn’t want to mess with them!!  So,  my mental picture of Wally was a huge man with glowing red eyes, horns, and a tail.  When I flew to Dallas for our meeting, I didn’t plan on liking Wally.  I figured that I was in for a tough day!

To my surprise, Wally turned out to be just the opposite.  Very bright, but also logical,  sensible, and reasonable.  After trading a few pleasantries, it might have taken us an hour or so to get things settled.  They agreed to be a foundry for us.  It took about two years to get the contract drafted and then the process working right, but once we did, they did a good job for us. What a relief.  Thanks, Wally!

But,  while I’m at it,  a little more about Wally Rhines.

In those days, it seemed as though everywhere I went, I ran into Wally talking about digital signal processing (DSP).  Technical conferences.  Wall Street conferences. Instat.  Everywhere!  Wherever I went, there was Wally making presentations about DSP.  To be honest with you, I had only a vague idea of what DSP was and really no idea at all of what it was good for.  To me it looked just like any other microprocessor except that it had an on-board hardware multiplier.  “So what?”  I thought.  The application example he always used was to calculate the five day running average of the Dow-Jones.  Again,  “So what?”

The fact was that TI had missed the boat in microprocessors.  In fact, everybody but Intel and AMD had missed that boat, but it took some companies a long, long time to figure out that they had missed it.  Everybody knew that microprocessors were going to be super important.  My sense was that Wally was only trying to assuage his conscience for that miss by talking about some hypothetical but unlikely upcoming DSP surge that TI would own.  I was wrong. He was right.  He was onto something that it took most of us a long time to figure out.  DSP was the missing link when it came to merging the real world (Analog) with Moore’s Law.

Today DSP is at the heart of all electronic communications  — there’s a huge amount of signal processing done in every cell phone and in all the communications links that exist.  At the end of the day, DSP was what allowed TI to become a 100 billion dollar market cap company. Congrats Wally.  Hope you were on commission!!  Wally later left TI to become CEO of Mentor.

MEC came up to speed and did a nice job for us for 25 years.  TI came up to speed as well and did foundry for us until we bought out their FPGA business in 1995.  In both cases, though, the process was a generation or two behind the state of the art. Xilinx products were always made on state of the art processes.  That was a very serious problem!!!

Next week:  More foundry woes.  How to get access to a state of the art process?

Pictured:  Wally Rhines.

See the entire John East series HERE.



NATO’s Collective Defense for Cyber Attack Remains Fragile
by Matthew Rosenquist on 09-08-2019 at 3:00 pm

The Secretary-General of NATO, Jens Stoltenberg, stated that all 29 member countries would respond to a serious cyberattack against any of the nations in the coalition. The pressing question is whether NATO will actually work together with combined forces when one of its members is attacked in an asymmetrical manner with digital technology.

When it comes to cyberattacks, there are many grey zones that could be manipulated in ever-increasing escalations of warfare. NATO’s Article 5 in the founding charter is known as the “collective defence” commitment, which states that an attack on one shall be considered an attack on all. Historically, it has a high threshold. The first time the criteria were met was the 9/11 terrorist attacks of 2001 against the United States.

There is a lot of ambiguity in the shadows of bits and bytes. Does shutting down the banking system or mercantile logistics count as an Article 5 attack? Would significant and prolonged communications and internet disruption count? What if the power was shut off by another nation-state and it caused harm to people? How about disrupting the transportation networks or other critical infrastructure? Such attacks can be localized or nationwide, can cause annoyances or lives to be lost, and could undermine the trust and control of a representative government. There are currently no thresholds of what should be considered to reach the level of an ‘attack’ by another nation.

Identifying the aggressor is another significant problem. Determining attribution or accountability for the source of a digital attack is highly subjective. It is easy to attribute the origin of tanks, planes, ships, and advancing soldiers to another country. Tracking malicious packets, origins of destructive code, and owners of crypto accounts is not simple. In the electronic world, it is easy to mislead, masquerade, conceal, or implicate others. The question becomes whether an attack came merely from criminals or was sponsored or coordinated by the government of another nation-state – a determination that is extremely difficult to make.

All the variables and hidden truths must be uncovered before equitable response options can be explored. The first fundamental order of business for NATO is to determine whether collective reactions are limited to the digital domain or whether physical attacks can also be part of the joint response. This decision must be clearly understood, as it may have even greater ramifications if it leads to an escalation of conventional or nuclear warfare.

Presently there are too many unanswered questions, unknown factors, and doubts. The result is an ineffective policy position.

It is time for NATO to codify the criteria, validation requirements, and allowable responses. Only then can cross-nation training and coordination begin in earnest. There is so much to be done. Until then, Article 5 for cyber attacks is just an idle threat of solidarity. It will take tremendous teamwork to make this a clear and effective deterrent for the NATO coalition.

Matthew Rosenquist, Cybersecurity Strategist -Former Intel Corp, LinkedIn Top10 TechVoices, 180K+ followers, Keynote Speaker, Board Advisor


Chapter Nine – Specialization Inhibits System Level Optimization
by Wally Rhines on 09-06-2019 at 6:00 am

Solving critical customer problems sometimes isn’t enough.  One of my most interesting experiences came during the development and rollout of a product that was designed to optimize integration of hardware and embedded software.  In this case, the product performed exactly as planned but the plan ignored the organizational complexities that come with specialization of skills in different divisions of a large company.

The product, called ASAP (not a great name but that wasn’t the reason it failed), analyzed a customer’s design at the RTL functional level, along with the embedded software.  It determined where the bottlenecks existed for optimum performance or power of the system.  We found an ideal customer who was designing a portable consumer product that was dissipating 8.5 Watts and wasn’t viable with the required size of batteries. Three engineers had worked for a year trying to reduce the power and had modified the design to dissipate only 6.5 watts, still far from the required 4.5 watts. We analyzed the customer’s design and, within a few hours, generated changes that reduced the power to 4.1 watts, well below the 4.5 watts goal of the customer.  This was done by identifying bottlenecks and automatically synthesizing hardware to substitute for functions that were inefficiently executing in software on an embedded CPU (Figure 1).  The customer was ecstatic about the result and we expected a major sale.

Figure 1 – Automatic analysis and synthesis to achieve reduced power

When we didn’t receive an order, we investigated.  The problem, it turned out, was an internal disagreement.  While the customer engineers agreed that they badly needed our product, they couldn’t agree which group, hardware or software design, would be responsible for using it.  The hardware engineers were adamant that “no software engineer is going to generate hardware in my chip design” and the software engineers were adamant that “no hardware engineer is going to change a single line of my software”.  Amazingly, the disagreement was so strong that they decided not to adopt our product and, instead, to kill the development of their own very promising product.

You might think that this was an extreme example but I’m increasingly convinced that it wasn’t. We experienced the same thing every time we developed products that crossed domains of expertise, from analog to digital, from mechanical to electrical, from software to hardware, from design to manufacturing, etc.  Software tools that appealed to one domain were not accepted by the other domain. (Figures 2 and 3)

Figure 2.  Differences in the way specialized groups do their work make it difficult to provide tools that cross domain boundaries

Figure 3.  Differing standards, metrics of performance, modes of communication and other differences prevent system level optimization

This is a phenomenon that appears repeatedly, especially in large organizations.  There are, however, ways that system level optimization can be achieved.  Some of these are listed in Figure 4. One of the most apparent examples of the evolution from problematic partitions to a successful organization structure was the change in the customer/supplier relationship that evolved with the advent of silicon foundries in the semiconductor industry over the last thirty years. When most semiconductor companies were vertically integrated, the tradeoffs of every new process technology led to major feuds between the design and the manufacturing engineers.  I know this because I had to referee the arguments many times.  A new generation of product required the most aggressive feature sizes possible while manufacturing yield and throughput favored the least aggressive.  A compromise had to be made and it usually was influenced more by politics than by engineering.  With the emergence of silicon foundries, the problem went away.  Now there were suppliers whose success depended upon providing the most aggressive design rules possible in a cost-effective manufacturing environment.  No more politics.  Just an insightful analysis of the manufacturing and design tradeoffs.

Another approach to solving the specialization problem is to form a startup company (Figures 4 & 5).  In a typical early stage startup, partitions of specialization have not yet formed, so the hardware engineer also frequently writes some of the embedded software, or is at least heavily involved in both.  In addition, startups typically have a key technical expert who will be respected by the potential customers’ most valuable development engineers.  The two of them can get together and exchange ideas for the ideal product because the startup engineer is not constrained by finding the solution in only one domain, e.g. in software or hardware.

Assimilating the task of one group into another also removes artificial partitions. This assumes that the new group truly integrates the responsibilities. It could mean making a group leader at a low level responsible for an integrated solution that involves both hardware and software, for example.

Figure 4.  Ways to overcome the barriers of organizational partitions

Figure 5. Removing the interfaces between customer and supplier

Another approach is to move the design to a higher level of abstraction so that tradeoffs can be made among differing specialties, e.g. hardware/software, mechanical/electrical, etc. If the abstraction level is high enough, then everyone can speak the same language (Figure 6).

Figure 6.  Abstraction temporarily solves optimization challenges

This works for a while but very quickly, the addition of detail into the design causes a split among specialties and the optimization effort is reduced or ceases entirely. In some industries, a new abstraction layer can be created at a lower level to overcome this problem.  SysML is one example.  AUTOSAR, for the automotive industry, is another.

Another solution is to conduct multi-physics simulation of designs to see the impact of tradeoffs in different domains (Figure 7). Even with this type of simulation data, it’s frequently difficult to determine which design domain should make changes to improve system level performance. As a minimum, however, it provides data for a rational discussion and takes some of the emotion out of the decisions.

Figure 7.  Multi-physics simulation

While these approaches offer potential, one must wonder whether there are any solutions that are universally applicable?  One overarching approach comes from the way a company handles its data management. For years, company managements hoped for the universal workstation that could be used by the many different disciplines— mechanical, electrical, software, etc.  That is not likely to happen.  Engineers need their own ways of working with design and manufacturing data and they are not going to change, nor would it be advisable to do so.  Efficiency in one domain requires different tools and methods of analyzing data that may not be efficient in another domain.

Despite this need for separation of development functions, engineers still need information from other domains to do their jobs.  A systems company needs a centralized database from which groups in different areas of the company can download and upload data for their own work and to access information from other domains.  A good example is the engineer who is developing wiring for a plane or car. Electrical design of the wiring harness requires detailed electrical simulation, analysis of potential sneak paths and optimization of “take up” alternatives of options in the vehicle so that the basic wiring cost is minimized while the wiring harness can be customized for a multiplicity of option combinations in a vehicle.  At the same time, the wiring approach will change to meet the three-dimensional characteristics of the mechanical design of the vehicle.  How does the electrical designer obtain the data needed to determine if a wire bundle will fit through a hole in the frame of the car?  Or how does the designer know the wire lengths in three dimensions? Does the designer import the mechanical database?  Impractical and probably impossible.  An extract of estimated wire tracks and lengths must be exported to the mechanical design environment and then simulated with mechanical models and tools. Similarly, subsets of system design data must be extracted from one design discipline to another throughout the design process evolution.

Figure 8. In an enterprise data base, unique data structures are needed for each type of discipline

Over many years, I have had the opportunity to work with teams to develop and modify products to make them usable by developers in different domains of expertise.  Some of the lessons learned from this experience are illustrated here.

First, it’s important to provide unique data structures and databases for each discipline. Mentor’s experience with Version 8.0 of our software drove this one home.  Forcing all the users to format their data in a fixed set of predetermined formats creates an inflexible system that doesn’t benefit anyone but the database vendor.  The database needs to be open and flexible.  Beyond Mentor’s own disastrous experience with the fixed data formats of the Falcon 8.0 database, we were later forced to support our Capital electrical architecture software on a Catia set of formats that suffered the same problem as Falcon.  Performance would have been hopelessly compromised, changes to database structures would require a major regeneration and verification of the database software and our product would have been vulnerable to knockoff by the database owner.  Instead, we created a digital flow for our data outside the Catia database. This approach requires working with data base vendors who favor openness.  This has always been a fundamental priority for Siemens Teamcenter and federated data base approaches of other companies but not necessarily for all database providers. Openness was a key compatibility philosophy for the merger of Siemens with Mentor Graphics that made the union successful.

Figure 9.  Don’t burden one discipline with another discipline’s detailed information

As mentioned earlier, there are still many people who believe that disparate design disciplines in a company should all use the same workstations, the same user interface, the same data structures, etc.  This philosophy is driven by the idea that it is good to have a single design and verification environment that transcends the differences in the enterprise. Engineers can then move from group to group with minimal retraining and design information is more easily shared.   Despite support for this concept among the managements of many companies, it rarely, if ever, happens.   Burdening an electrical designer with the overhead of the mechanical, manufacturing, thermal, etc. detailed design information doesn’t seem to work.  The trick is to be able to access the pieces of data from another domain that are needed to do your job in your domain.  Even better is an architecture that lets you export abstractions of your design to another domain to perform tasks not well suited to the domain of your expertise.  This is how electrical wiring is done when the electrical designer needs to make sure his design meets the constraints of the mechanical embodiment of the product (Figure 10).

Figure 10. Enable selective access to the required data; facilitate rapid translation of data formats

Flexibility and openness of the enterprise data base is the most important criterion (Figure 11).  If addition of a new data format requires a major revision of the entire data base system, it’s impractical to wait. Typically, other things are impacted when a major revision of this type is attempted so the data base structure must be designed for flexibility to change some formats without having to reverify the entire database system.

Finally, the more a design environment feels familiar, the more likely the development engineers will create good products (Figure 12).

Figure 11.  Make sure the enterprise data management has the flexibility to add or change data formats selectively without re-verification of the entire data base management system

Figure 12.  Developers have enough to worry about without adapting to changes in their design environment and support

Although the “lessons learned” provide guidance for how data bases and design environments should be structured, few large corporations have been able to implement the level of interoperability between disciplines that they would like.  Figure 13 is still a hope rather than a reality.  Even if the commercial databases and design software provide the capability for data to be accessed and analyzed from functional domain to functional domain, system optimization would still require that compromises be made in one domain to achieve the optimum result at the system level.  Perhaps this is why systems companies who find ways to overcome this challenge have traditionally achieved higher operating margins than component companies.

Figure 13.  Specialization in large enterprises can be a strength, rather than a burden.  Development environments that maintain the needed specialization by discipline while affording access to data in other domains lead to the most productive enterprise

It’s likely that success will evolve application by application.  The case of electrical wiring of cars and planes reached such a critical level that integrated solutions evolved among the electrical, mechanical and manufacturing domains. Other applications are reaching a critical point where system optimization can only be achieved in an environment where multi-domain tradeoffs can be made. Making these tradeoffs at the highest possible level of abstraction is most likely to produce an optimum result and is also most likely to facilitate compatible development in the diverse functional domains of the corporation.


TSMC OIP Overview and Agenda!
by Daniel Nenni on 09-05-2019 at 6:00 am

The TSMC Symposium and OIP Ecosystem Forum are the most coveted events of the year for the fabless semiconductor ecosystem, absolutely. In my 35 years of semiconductor experience never has there been a more exciting time in the ecosystem, and that is clear from the overview and agenda for this year’s event. I hope to see you there:

The TSMC OIP Ecosystem Forum brings together TSMC’s design ecosystem companies and our customers to share practical, tested solutions to today’s design challenges. Success stories that illustrate TSMC’s design ecosystem best practices highlight the event.

More than 90% of last year’s attendees said that “the forum helped me better understand TSMC’s Open Innovation Platform” and that “I found it effective to hear directly from TSMC OIP member companies.”

This year’s event will prove equally valuable as you hear directly from TSMC OIP companies about how to apply their technologies and joint design solutions to address your design challenges in High-Performance Computing (HPC), Mobile, Automotive and Internet-of-Things (IoT) applications.

This year, the forum is a day-long conference kicking off with trend-setting addresses and announcements from TSMC and leading IC design company executives.

The technical sessions are dedicated to 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies. And the Ecosystem Pavilion features up to 70 member companies showcasing their products and services.

Date: Thursday, September 26, 2019 8:00 AM – 6:30 PM

Venue: Santa Clara Convention Center

Learn About:

  • Emerging advanced node design challenges including 5nm, 6nm, 7nm, 12FFC/16FFC, 16FF+, 22ULP/ULL, 28nm, and ultra-low power process technologies.
  • Updated design solutions for specialty technologies supporting Internet-of-Things (IoT) applications
  • Successful, real-life applications of design technologies and IP from ecosystem members and TSMC customers
  • Ecosystem-specific TSMC reference flow implementations
  • New innovations for next generation product designs targeting HPC, mobile, automotive and IoT applications

Hear directly from ecosystem companies about their TSMC-specific design solutions. Network with your peers and more than 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event.  Please register to attend. We look forward to seeing you at the event.

The views expressed in the presentations made at this event are those of the speaker and are not necessarily those of TSMC.

Agenda:

Join the TSMC 2019 Open Innovation Platform® Ecosystem Forum and hear directly from TSMC OIP companies about how to apply their technology to your design challenges!

08:00 – 09:00 Registration & Ecosystem Pavilion

09:00 – 09:20 Welcome Remarks

09:20 – 10:10 TSMC and its Ecosystem for Innovation

10:10 – 10:30 Coffee Break


Please click the paper title to see its abstract:

The technical sessions run in three parallel tracks: HPC & 3DIC, Mobile & Automotive, and IoT & RF.

10:30 – 11:00
  • HPC & 3DIC: TSMC 3DIC Design Enablement Updates (TSMC)
  • Mobile & Automotive: TSMC EDA & IP Design Enablement Updates (TSMC)
  • IoT & RF: TSMC RF Design Enablement Updates (TSMC)

11:00 – 11:30
  • HPC & 3DIC: Calibre in the Cloud – A Case study with AMD, Mentor & TSMC (Microsoft/AMD/Mentor)
  • Mobile & Automotive: Functional Safety Analysis and Verification to meet the requirements of the Automotive market (Texas Instruments/Cadence)
  • IoT & RF: Simplify Energy Efficient designs with cost-effective SoC Platform (Dolphin Design)

11:30 – 12:00
  • HPC & 3DIC: Optimizing FPGA-HBM in InFO_MS Structure (Xilinx/Cadence)
  • Mobile & Automotive: Thermal-induced reliability challenge and solution for advanced IC design (ANSYS)
  • IoT & RF: Flexible clocking solutions in advanced FinFET processes from 16nm to 5nm (Silicon Creations)

12:00 – 13:00 Lunch & Ecosystem Pavilion

13:00 – 13:30
  • HPC & 3DIC: Chiplets solutions using CoWoS and InFO with 112Gbps SerDes and HBM2E/3.2Gbps for AI, HPC and Networking (GUC)
  • Mobile & Automotive: Overcome time-to-market and resource challenges: Hierarchical DFT for advanced node SoC design and production (AMD/Mentor)
  • IoT & RF: Developing AI-based Solutions for Chip Design (Synopsys)

13:30 – 14:00
  • HPC & 3DIC: Realizing Adaptable Compute Platform for AI/ML and 5G with Synopsys’ Fusion Design Platform (Xilinx/Synopsys)
  • Mobile & Automotive: Comprehensive ESD/Latch-up reliability verification for IP & SoC Designs (NXP/Silicon Frontline/Mentor)
  • IoT & RF: Reliable, Secure and Flexible OTP solutions in TSMC for IoT, AI and Automotive Applications (eMemory)

14:00 – 14:30
  • HPC & 3DIC: HBM2E 4gbps I/O Design Techniques in 7nm & Below Nodes (Open-Silicon)
  • Mobile & Automotive: Sensor fusion and ADAS SOC designs in TSMC 16FFC and N7 (Cadence)
  • IoT & RF: High-Speed Interface IP PAM-4 56G/112G Ethernet PHY IP for 400G and Beyond Hyperscale Data Centers (Synopsys)

14:30 – 15:00
  • HPC & 3DIC: Pushing 3GHz Performance of TSMC N7 Arm Neoverse N1 CPU using the Cadence Digital Flow (Cadence/Arm)
  • Mobile & Automotive: AWS Cloud enablement of design characterization flows using Synopsys® Primetime® & HSPICE® (Xilinx/Synopsys)
  • IoT & RF: Automotive IP Functional Safety – A Verification Challenge (Cadence)

15:00 – 15:30 Coffee Break & Ecosystem Pavilion

15:30 – 16:00
  • HPC & 3DIC: Large Scale Silicon Photonic Interconnects for Mass Market Adoption (HPE/Mentor)
  • Mobile & Automotive: A New Era of MIPI D-PHY and C-PHY: Automotive Applications (M31)
  • IoT & RF: Best practices for Arm Cortex CPU energy efficient implementation flows (Arm)

16:00 – 16:30
  • HPC & 3DIC: Photonics Coming of Age: From Laboratory to Mainstream Applications (Cadence/Lumerical)
  • Mobile & Automotive: Integrating ADAS Controllers with Automotive-Grade IP for TSMC N7 (Synopsys)
  • IoT & RF: The Challenges Posed by Dynamic Uncertainty on AI & ML Devices Targeting 16nm, 7nm & 5nm (Moortec)

16:30 – 17:00
  • HPC & 3DIC: Accelerating Semiconductor Design Flows with EDA on the Cloud (Astera Labs/AWS)
  • Mobile & Automotive: Arm automotive physical IP addresses new feature and functionality demands (Arm)
  • IoT & RF: Developing AI Accelerators for TSMC N7 (Synopsys)

17:00 – 17:30
  • HPC & 3DIC: Best Practices using Synopsys Fusion Technology to Achieve High-performance, Energy Efficient implementations of the latest Arm® Processors in TSMC 7nm FinFET Process Technology (Synopsys/Arm)
  • Mobile & Automotive: Cloud-based Characterization with Cadence Liberate Trio Characterization Suite and Arm-based Graviton (Cadence)
  • IoT & RF: Optimize SOC designs while enabling faster tapeouts by closing chip integration DRC issues early in the design cycle (MaxLinear/Mentor)

17:30 – 18:30 Social Hour

 

TSMC pioneered the pure-play foundry business model when it was founded in 1987, and has been the world’s largest dedicated semiconductor foundry ever since. The company supports a thriving ecosystem of global customers and partners with the industry’s leading process technology and portfolio of design enablement solutions to unleash innovation for the global semiconductor industry.

TSMC serves its customers with annual capacity of about 12 million 12-inch equivalent wafers in 2019 from fabs in Taiwan, the United States, and China, and provides the broadest range of technologies from 0.5 micron plus all the way to foundry’s most advanced processes, which is 7-nanometer today. TSMC is the first foundry to provide 7-nanometer production capabilities, and is headquartered in Hsinchu, Taiwan.

REGISTRATION


Carnegie Robotics Case Study: RTLvisionPRO
by Daniel Nenni on 09-04-2019 at 10:00 am

RTLvisionPRO has proven to be an indispensable tool which has greatly improved the productivity and work-flow of our current task: understanding, verifying, and documenting the existing RTL IP library at our company. Consisting of about 500 Verilog and VHDL files, the library has been under development for several years and implements a multitude of image-processing modules and pipelines used in our current sensor and peripheral products. While well-tested and vetted, most modules in the library lack an in-depth level of documentation that is needed for effective re-deployment in new products. Our task has been to carry out a critical review of the library modules and to create a detailed set of documents which convey the deep knowledge necessary to understand their implementation and function.

Webinar: Designing Complex SoCs and Dealing with Multiple File Formats?

As expected, the task has been overwhelming and difficult. Perhaps a complicating factor is the fact that the same FPGA serves multiple purposes in different products and peripherals. Many pins have multiple functions with a significant percentage remaining unused in each implementation. Working through the design by tracing from signals can be quite time consuming. One needs to navigate from file to file tracing signals and nodes in an effort to understand how the design works. This is a tedious task as steps need to be documented while traversing from module-to-module. RTLvision essentially does all the tedious parts of this task. It has a seamless interface with which one can navigate through the design. It keeps track of every step along the way so you can step back and forth and record the appropriate snapshot for documentation. If you go through something unimportant, you can simply go back to where you were before you took the wrong turn. This is an outstanding feature. It also supports bookmarks so you can jump to strategic places quickly.

The typical first step in understanding an existing design is to get a top-level picture of the entire system. This requires a comprehensive capture and representation of the design, all in one place. We were able to import the entire code base into RTLvision using a simple script that listed the directory paths to the libraries and the RTL design files themselves. The tool crunched on the files and returned a set of top-level designs it found in them. This was surprisingly simple and quick. After the import stage, the tool builds a database that can be saved to a file, eliminating the need to re-import the sources. The database of our 500-file design loads in seconds.
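
To give a feel for the kind of file list described above, here is a minimal, hypothetical sketch in Python that collects Verilog and VHDL source paths into a plain one-path-per-line list. The directory name, extensions, output filename, and helper name are all assumptions for illustration; the actual RTLvision import commands are tool-specific and are not reproduced here.

# Hypothetical file-list generator (names and paths are assumptions, not from
# the article): walk an RTL tree and write one source path per line, the kind
# of simple list that can be fed to an import step.
from pathlib import Path

RTL_ROOT = Path("rtl_ip_library")            # assumed root of the ~500-file library
EXTENSIONS = {".v", ".sv", ".vhd", ".vhdl"}  # Verilog and VHDL sources

def write_filelist(root: Path, out_file: str = "rtl_filelist.txt") -> int:
    """Collect every RTL source under `root` and write one path per line."""
    paths = sorted(p for p in root.rglob("*") if p.suffix.lower() in EXTENSIONS)
    Path(out_file).write_text("\n".join(str(p) for p in paths) + "\n")
    return len(paths)

if __name__ == "__main__":
    print(f"Wrote {write_filelist(RTL_ROOT)} RTL file paths to rtl_filelist.txt")

The point is only that the import input can be this simple; the tool does the heavy lifting of elaborating the design and finding the top levels.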

The global schematic view given by all FPGA tools is not particularly useful. Essentially, you see a schematic view similar to the figure above: a number of modules connected by millions of indiscernible connections. Tracing through that is basically impossible. We needed a tool that shows the modules but helps us trace signals as we work our way through a data path. This is exactly where RTLvision shines. Using the Cone view, you can focus on the parts of the path you are interested in without displaying the clutter from all the adjacent nodes and signals. This is a powerful feature of the program. You can see more and more detail by repeatedly clicking on the traced signal, and if you click too many times, uncovering uninteresting parts, you can always go back to where you were using the back arrow.
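
To make the cone idea concrete, here is a small, purely illustrative sketch of fan-in cone expansion on a toy netlist. The signal names and driver map are invented, and this is not the tool's implementation; it only shows the concept of growing the displayed cone one level of drivers per click instead of drawing the whole netlist at once.

# Toy driver map: signal -> signals that drive it (its immediate fan-in).
# Everything here is invented for illustration.
DRIVERS = {
    "dout":    ["pipe_q"],
    "pipe_q":  ["mul_out", "clk"],
    "mul_out": ["a_reg", "b_reg"],
    "a_reg":   ["a_in", "clk"],
    "b_reg":   ["b_in", "clk"],
}

def expand_cone(signal: str, levels: int) -> set:
    """Return the fan-in cone of `signal`, expanded by `levels` steps
    (each step is roughly one click in a cone-style view)."""
    cone, frontier = {signal}, {signal}
    for _ in range(levels):
        frontier = {d for s in frontier for d in DRIVERS.get(s, [])} - cone
        cone |= frontier
    return cone

print(expand_cone("dout", 1))  # just the signal and its immediate driver
print(expand_cone("dout", 3))  # three levels of drivers, nothing else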

You can quickly generate schematic diagrams which are derived from the RTL files. You can move things around or have the program place components in a sensible orientation to make a clear diagram similar to the one above.

We continue to learn about the other debugging and analysis features of the program. But simply for going through and understanding the bulk of a large design, RTLvisionPRO has been an indispensable tool.

Omead Amidi, Ph.D.

Carnegie Robotics, LLC
4501 Hatfield Street
Pittsburgh, PA 15201

Webinar: Designing Complex SoCs and Dealing with Multiple File Formats?


AI, Safety and the Network

AI, Safety and the Network
by Bernard Murphy on 09-04-2019 at 6:00 am

If you follow my blogs you know that Arteris IP is very active in these areas, leveraging their central value in network-on-chip (NoC) architectures. Kurt Shuler has put together a front-to-back white-paper to walk you through the essentials of AI, particularly machine learning (ML), and its application in, for example, cars.

He also highlights an interesting point about this rapidly evolving technology. As we build automation from the edge to the fog to the cloud, functionality, including AI, remains quite fluid between levels. Kurt points out that this is somewhat mirrored in SoC design. In both cases, architecture is constrained by the need to optimize performance and minimize power across the system through intelligent bandwidth allocation and data locality. And for safety-critical applications, safety around intelligent features must be designed and verified not only within and between SoCs in the car but also beyond it, for example in V2X communication between cars and other traffic infrastructure.

What’s driving the boom in AI-centric design? If you’re not in the field it might seem that AI is just another shiny toy for marketing, destined for the garbage can when some inevitable fatal flaw emerges. You couldn’t be more wrong (pace Sheldon Cooper). Tractica estimates that more than 80% of the growth in semiconductors between now and 2025 will be driven by AI applications, whether based on standard platforms (CPU, GPU or FPGA), custom-built ASICs/ASSPs, or highly specialized accelerators. (The overlay in the image above is Kurt’s addition.)

None of this demand depends on major inflection points in the way we live. It’s all about incremental improvements to safety, productivity, security and convenience – all the things technology has been improving for a long time. Whether or not ADAS in cars impresses insurance providers (per my earlier blog), it definitely impresses this owner and apparently many others. I’ve said before that I’ll never switch back to a car with lesser ADAS features – they more than pay for themselves. Security for businesses and industrial plants is already moving from needing to constantly monitor security camera feeds to only having to check when unusual movement and/or sounds (a dog barking, glass breaking) are detected. Incidentally, this is also much better for home security.

Oncologists may be able to more reliably detect potentially cancerous lung tissue based on CT scans (following further analysis). Smart storage providers now need to build intelligence into their solutions to better manage housekeeping and other scheduling, to maximize throughput and minimize failures. All of this is bread-and-butter stuff; no consumer, industrial or business revolutions required, just adding more automation to what we already have.

McKinsey went a step further than Tractica, breaking down AI hardware growth between training and inference in the datacenter and training and inference on the edge. The biggest contributors are in the datacenter (I’m curious to know how much of that is for cat recognition and friend-tagging in Facebook photos.) And the highest growth is on the edge, growing in 7 years from essentially zero to $5-6B. These two surveys don’t come up with the same numbers for 2025, but that’s not very surprising. What is consistent is the high growth.

Why do Kurt and Arteris IP care about this? They care because the interconnect within an SoC is proving to be at the nexus of AI performance, power, cost and safety for these fast-growing applications. On performance, power and cost: these are big SoCs, smartphone-AP-size or even bigger, which means the on-chip networks have to be NoCs, and Arteris is pretty much the only proven commercial supplier in that area. Also, AI accelerators are not intrinsically coherent with the compute cluster on the SoC but have to become so to share imaging, voice and other data with the cluster; Arteris IP solutions help here through proxy caches and last-level cache solutions. And with the more advanced accelerators, these NoC solutions often play an even deeper role in islands of cache coherence, in mesh, ring or torus architectures, and in connecting to external dedicated high-bandwidth memory.

Finally, on safety, Arteris IP is well plugged into ISO 26262 (Kurt has been on the working group for many years); they work very closely with integrators and with a certification group to ensure their products meet the letter and the spirit of the standard (increasingly important in ISO 26262-2). And they provide safety-related features within the logic they develop to support ECC, duplication and other safety-related requirements within the domain of the logic they supply.

You can learn more by reading this white-paper.