IP: Make or Buy?
by Paul McLellan on 07-30-2013 at 2:02 pm

A couple of weekends ago I moderated a panel session for the Chinese American Semiconductor Professional Association. No, I had no idea such an organization existed either (at least partially because I’m not Chinese). Dan Nenni was meant to be doing it but he went off to Las Vegas, so I ended up getting the job. On a Saturday no less. It was on the topic of IP: Make or Buy, Ingredients for Success in System-on-Chip.

The panel was a good mix of people from different slots in the IP ecosystem:

  • Yonghua Song of Marvell (a user of IP)
  • Andy Haines of Arasan (a supplier of IP)
  • Will Chen of Finnegan, Henderson, Farabow, Garrett & Dunner (a lawyer mostly concerned with patent issues in semiconductor IP)
  • Yi-Hung Chee of Intel (a company that develops most of its own IP)

Since I was moderating the session I couldn’t really take notes so I’ll focus on Andy Haines’s position since it was actually a reasonably good summary of the industry. Funnily enough I first met Andy when I interviewed at VLSI Technology since he was the EDA marketing guy, so that is a long time ago (and different hair color for us both).

He is focused on the mobile space, and the overall trend, although clearly nothing absolute, is for people to buy more IP and build less themselves. Companies that are early adopters of each new process node, such as Qualcomm, build more of their own IP and license less, at least partially because, at the point they need the IP, it either hasn’t been developed yet or hasn’t reached a level of maturity they are confident using.


However, the real action in smartphones, despite all the press excitement, is not so much at the high end, which is a mature market largely in replacement mode. Instead it is in the mid and low range of smartphones, where future growth is expected to be strongest. Processor vendors are targeting less powerful, less power-hungry cores at this market, which has smaller screens and lower computational requirements. However, the peripheral interfaces are pretty much the same (maybe fewer of them in any given phone). Flash is a standard. USB is a standard. DDRx are standards. There are different versions of the standards, but you can’t just take a standard and change the performance and the power in any arbitrary way just because the market in India (say) can make do with slower USB.

In fact these standards are not very standard, in the sense that they are changing fast. This on its own makes it a challenge to keep internally developed IP up to date. You can’t just use it again on next year’s chip since some new wrinkles have been added to the standard, yet backward compatibility with all the old devices remains important.

But everyone who is not trying to get into 20nm the moment it opens for HVM faces the key question: why would you build your own IP if there is a good solution available for purchase?

IP these days is not just the RTL or layout that you need. On its own that is not too useful. There is Verification IP (VIP). There are hardware validation platforms. There are device drivers and software stacks. There are not just digital controllers but analog PHYs that interface to signals coming from the world outside the SoC. Plus there is the commitment that as the standards evolve this whole portfolio of views of the IP will evolve and keep up.

All of this means that the answer to the key question is pretty much that you buy IP if you can, and build it if you can’t, either because it is something specific to your own company or process (if you are Intel for example) or because you need the IP faster than the IP industry can deliver it.

Foundries are engaging earlier and earlier with IP suppliers to make sure that even on the most advanced processes, IP is available when the process is ready, and has already been silicon tested in early shuttles before volume production starts. TSMC’s OIP is the most obvious example (this involves EDA flows too, but this was an IP panel).

If IP is not ready in time for the foundry, the most leading-edge companies may design their own. But everyone else will just have to wait. It is not simple to design (say) a DDR controller and its PHY, and most design teams don’t have the expertise in house even if they wanted to take the make rather than the buy route.

So the trend in IP is clear. More is (and is going to be) purchased and less is going to be done in house.

CASPA website is here.


Power and Reliability Sign-off – A must, but how?
by Pawan Fangaria on 07-29-2013 at 11:00 am

With SoCs packing multiple functions together on leading-edge technologies to improve performance and area, power, which was earlier neglected, has become critical and needs special attention in SoC design. Reliability considerations follow as well, due to the many electrical and physical effects that appear at smaller nodes and at the high densities of modern SoCs. Both power and reliability need specific focus so that issues can be analyzed and fixed as early as possible in the design phase.

I was all ears at a webinar presented by N. Kannan of Freescale Semiconductor, who talked in great detail about how they are tackling the challenges of power and reliability in their advanced automotive and networking SoCs, and also about how they are leveraging Apache tools such as RedHawk, Totem, CPM, PathFinder, PowerArtist, Sentinel and others for these purposes. I was extremely impressed by the capabilities of these tools and by the prudence being exercised at Freescale in designing SoCs. It’s my pleasure to present just a glimpse of that below.

Automotive SoCs typically have on-chip Flash, a Power Management Unit (PMU) and analog IPs with multiple power domains, and require PCB-Package-Die sign-off for electromagnetic compliance. Networking SoCs, on the other hand, are very large designs with multiple cores running at high frequencies, and therefore have high peak and average power consumption. Package-Die sign-off for power integrity, signal integrity, simultaneous switching output and thermal conduction is a must for these SoCs.

RedHawk is used for electrical as well as physical modelling. For standard cells, modelling of current and capacitance can be done using the Apache Power Library (APL) format; current de-rating due to voltage drop is captured in the model. For multi-bit flops, RedHawk simplifies the modelling by approximation, needing just n characterizations for an n-bit flop. For a memory, RedHawk is able to recognize the bit-cell array regions inside the memory and provide a more accurate distribution of currents and capacitances.

[Electrical Modelling Options]

Depending on the size and nature of the design, any or all of these three options can be used for electrical modelling. Option 1 is a very simple model based on approximation; Option 2, the most commonly used, is based on full simulation data; and Option 3, the most expensive in run time, does transistor-level modelling using Totem. Option 3 provides fully simulation-based analysis and is used in cases such as Flash, where uniform current distribution is required.

RedHawk provides extensive checks for connectivity and reliability, such as weak spots in the grid, resistance bottlenecks (through short-path tracing), missing vias, EM violations, IR drop bottlenecks, current hot-spots and so on, and provides what-if scenario analysis on IR and EM by using region-based power assignment.

As an example, a long wire with high resistance is flagged during the PG weakness check, and a pad consuming very high current is flagged during the pad placement quality check. Pad placement needs to be optimized with respect to the average current ratio.

Similarly, clock buffers may be clustered in a particular region, leading to high switching and power density there, which needs to be fixed.

EM violations in regions that source high current, such as pad locations, and hot-spots caused by excessive dynamic IR drop in regions of high-activity logic clustering also need fixes.

CPM (Chip Power Model) is used to build a compact abstraction of the full-chip Power Distribution Network (PDN). Chip, package or board level simulation can then be done. The frequency spectrum of the chip’s current demand can be obtained, and time domain analysis can be done for the chip-package. High-frequency noise associated with high peak current, and the corresponding layout regions, can be identified and corrected for EM compliance.

Kannan also talks about reliability sign-off done using Totem on standard cells. They are also working on using PathFinder for ESD and current density checks and PowerArtist for RTL power estimation and reduction, and they are working with Sentinel for chip power density and thermal map analysis.

The actual webinar is very thorough, containing details about the various problems and the ways to fix them. The presentation was also given at DAC 2013. Thanks to Apache and Freescale for making it freely available to our larger community. The webinar, titled “Power, Noise and Reliability Consideration for Advanced Automotive and Networking ICs”, can be found here.


Premier Gathering for Semiconductor Professionals!
by Daniel Nenni on 07-28-2013 at 6:00 pm

The US Executive Forum hosted by the Global Semiconductor Alliance is coming up on September 25th at the beautiful Rosewood Sand Hill Hotel in Menlo Park. Over 150 executives from the semiconductor and technology industry will attend, creating a truly unique opportunity to listen to some of the world’s foremost speakers address topics such as US competitiveness and innovation. More importantly, you get the opportunity to meet the attendees themselves. Take a look at this LIST! CEOs, CTOs, the Who’s Who of the semiconductor industry, ready to meet and greet you at the VIP reception.

As if that isn’t enough, the keynote speaker is Dr. Condoleezza Rice. She served as the 66th United States Secretary of State and is currently a faculty member of the Stanford Graduate School of Business and a director of its Global Center for Business and the Economy. Dr. Rice will share her unparalleled expertise on how America’s policies influence international trade relations and global affairs. Following her keynote address, Dr. Rice will engage the audience in an interactive Q&A session.

Clearly WHO you know in this business is as important as WHAT you know so do not miss this opportunity to expand your horizons. GSA is the unifying body of the global semiconductor industry. Membership spans the entire ecosystem, representing the world’s best IDMs, fabless companies, and their suppliers. If your company is not a member it should be.

Agenda

  • 12:00 p.m. – Networking Lunch
  • 12:45 p.m. – Opening Remarks

Connected Services in the Digital Era

  • 1:00 p.m. – Keynote Address: capturing the technological landscape of the next decade and its impact on consumers’ lives. David Small, Chief Platform Officer, Verizon Enterprise Solutions
  • 1:30 p.m. – Panel Discussion: game-changing trends stemming from the Internet boom, with a focus on superior content delivery via today’s burgeoning networks and appliances. Panelists include Jim Buczkowski, Henry Ford Technical Fellow & Director, Electrical and Electronics Systems Research & Innovation, Ford Motor Company, and Guido Jouret, GM, Emerging Technologies & Chief Technology Officer, Cisco Systems
  • 2:30 p.m. – Networking Break

Championing Economic Growth

  • 3:00 p.m. – Keynote Address: insight on enabling private sector growth, innovation and competitiveness amid today’s political and economic landscape
  • 3:30 p.m. – Panel Discussion: CEOs from leading semiconductor companies discuss, among other things, the toughest challenges facing our industry today and what reforms can address them. Moderator: Dr. Aart de Geus, Chairman & Co-CEO, Synopsys. Panelists include Young Sohn, President & CSO, Device Solutions, Samsung Electronics
  • 4:30 p.m. – Networking Break
  • 5:00 p.m. – Keynote Address and Interactive Q&A with Dr. Condoleezza Rice, Secretary of State (2005-2009)
  • 6:00 p.m. – Closing Remarks and Reception

Have questions about this Forum? Please contact:
Nicole Bowman
O 972.866.7579 ext. 129
M 972.814.6866
E nbowman@gsaglobal.org



Lynn Conway’s Story
by Paul McLellan on 07-28-2013 at 12:08 am

If you are my age, you know that the most influential book of that era on VLSI design was Carver Mead and Lynn Conway’s textbook, blah VLSI blah. Nobody can remember exactly what its title was; it was just referred to as Mead and Conway. In my opinion it was the most influential book on semiconductor design ever. It opened up VLSI design to computer scientists, and since they understood complexity they eventually won out over the EE guys as designs got insanely complicated…10,000 gates, how can we cope?

Lots of us (and even you youngsters who came later) owe a lot to Mead & Conway. So who were they? Carver Mead was a professor at CalTech and Lynn Conway was a researcher at Xerox PARC (Palo Alto Research something-beginning-with-C probably Center). But Lynn Conway had a deep secret that in those days she wasn’t ready to reveal.

She started life as a guy.

She had a hugely successful early research career at IBM, working on supercomputing techniques that are still part of the most modern microprocessors today, basically the foundations of out-of-order execution. But she was a woman trapped in a man’s body and eventually she decided she had to go through with gender reassignment surgery. This was too much for IBM at the time (and let me be the first to point out that today’s IBM would never do this), so they fired her. She is private about what her real name was back then, to protect lots of people from her family to her then-friends, and uses the name Robert Sanders for that period of her life.

So she was basically screwed, with no family, friends or job. She got some positions as a contract programmer. OK, let’s face it, she was an incredibly good contract programmer. But how do you get from A to B?

The guys at PARC, which was just starting up, noticed her at Memorex, where she was working, and recruited her. PARC in that era was the most innovative computer science location in the world, blowing away Bell Labs and places like that, as well as every academic department from Stanford to MIT. A huge proportion of the top computer scientists in the world worked there. Including Lynn Conway.

I met her for the first time last summer at a party at the house of Dick Lyon (inventor of the first optical mouse), although we’d exchanged a few emails before that.

This June, Lynn went to the White House to celebrate LGBT pride month. I live in San Francisco, so this is a big deal here every year. But Lynn’s story is an even bigger deal. The guys at PARC recruited her. Think how different the world might have been if that had not happened. Moore’s law would have advanced and presumably someone else would have tamed the complexity in some way. But Carver Mead and Lynn Conway were just in the right place at the right time to lead people like me into what became the VLSI world.

Lynn’s reminiscences here. Huffpost article by Lynn from last week here.

 


What Applications Implement Best with High Level Synthesis?
by Daniel Payne on 07-26-2013 at 3:12 pm

RTL coding using languages like Verilog and VHDL has been around since the 1980s, and for almost as long we’ve been hearing about High Level Synthesis, or HLS, which allows an SoC designer to code above the RTL level, at the algorithm level. The most popular HLS languages today are C, C++ and SystemC. Several EDA vendors have tools in this space, and one of them is Forte Design Systems, founded in 1998.

My question today is, “What applications implement best with HLS?”

Let’s take a look at three application categories where an HLS approach makes sense.

Digital Media

I love to view or create digital media with my devices:

  • 35mm Canon DSLR
  • MacBook Pro laptop
  • iPad tablet
  • Google Nexus 7 tablet
  • Samsung Galaxy Note II smart phone
  • Amazon Kindle Paperwhite, e-book reader

Each new generation of graphics processing in a tablet increases throughput by about 4X, creating smoother experiences. With all of the increase in pixel counts and frame rates, designers must still meet ever more competitive battery life targets, which means controlling power throughout the design process.

Hardware acceleration of algorithms is the way to make your consumer devices stand out, instead of using software-based approaches on a general purpose CPU.

If you insist on coding at the RTL level it will simply take you much longer to explore, refine and implement a given algorithm. For example, one IC designer coded a motion estimator block in C in just one quarter of the time it took using RTL code.

Digital media designers can code their algorithms directly in C.
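To give a flavor of what that algorithm-level code looks like, here is a minimal sketch of the kind of kernel a motion estimator is built from: a sum of absolute differences (SAD) over one candidate block, written in plain C++. The block size, data types and function name are my own assumptions for illustration, not code from Forte’s flow; the point is that fixed-bound loops like these are exactly what an HLS tool can unroll and pipeline into hardware.

```cpp
// Sketch of an HLS-friendly digital-media kernel: SAD for one 8x8 candidate
// block in motion estimation. Sizes and types are illustrative assumptions.
#include <cstdint>
#include <cstdlib>

constexpr int BLOCK = 8;  // 8x8 block, chosen only for illustration

// Returns the sum of absolute differences between a reference block and a
// candidate block. An HLS tool can unroll/pipeline these constant-bound
// loops into parallel absolute-difference units feeding an adder tree.
uint32_t sad8x8(const uint8_t ref[BLOCK][BLOCK],
                const uint8_t cand[BLOCK][BLOCK]) {
    uint32_t sum = 0;
    for (int y = 0; y < BLOCK; ++y) {
        for (int x = 0; x < BLOCK; ++x) {
            sum += static_cast<uint32_t>(std::abs(ref[y][x] - cand[y][x]));
        }
    }
    return sum;
}
```

A full motion estimator would evaluate this kernel over many candidate positions and keep the minimum, which is where exploring parallelism at the algorithm level pays off.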

Security

I’ve done some web programming where sensitive credit card data needed to be protected, so I used a PHP function for the MD5 algorithm. Likewise, an SoC can implement this same algorithm in hardware.
Here’s a list of security algorithms well suited for HLS:

To get a feel for what the SystemC code looks like for any of these algorithms, visit the OpenCores web site, which also shows the same code in Verilog.
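Without reproducing a real hash or cipher, here is a toy mixing function (emphatically not MD5 and not production cryptography) that shows the fixed-width, constant-bound, rotate-and-XOR style these algorithms share, which is what makes them map so naturally to HLS hardware. The constants, widths and names are illustrative assumptions only.

```cpp
// Toy illustration of hash-like code in an HLS-friendly style: constant-bound
// loops over fixed-width words, using rotates, XORs and adds that a tool can
// unroll into combinational logic. Not a real cryptographic algorithm.
#include <cstdint>

// Rotate-left on a 32-bit word, a primitive used in many hash rounds.
static inline uint32_t rotl32(uint32_t x, unsigned r) {
    return (x << r) | (x >> (32 - r));
}

// Mix a 16-byte input block into a 32-bit digest-like value.
// The constants and structure are made up purely for illustration.
uint32_t toy_mix(const uint8_t block[16]) {
    uint32_t state = 0x6a09e667u;  // arbitrary starting constant
    for (int i = 0; i < 16; i += 4) {
        uint32_t word = static_cast<uint32_t>(block[i]) |
                        (static_cast<uint32_t>(block[i + 1]) << 8) |
                        (static_cast<uint32_t>(block[i + 2]) << 16) |
                        (static_cast<uint32_t>(block[i + 3]) << 24);
        state = rotl32(state ^ word, 7) + 0x9e3779b9u;  // rotate, xor, add
    }
    return state;
}
```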

Wireless

The final application area well suited to HLS is wireless, driven by consumer electronics devices and the IoT (Internet of Things). Instead of coding in RTL and then being surprised when the specification changes, which can add weeks to your SoC schedule, you can code at the algorithm level, update your code, and re-synthesize in days or hours.

Examples of wireless applications include:

Summary

HLS is here to stay and there’s a growing list of applications that will benefit from coding algorithms in SystemC. Forte offers an HLS tool called Cynthesizer that is widely used in the industry for Digital Media, Security and Wireless applications.



Epitaxy: Not Just For PMOS Anymore
by Paul McLellan on 07-25-2013 at 2:25 pm

At Semicon I met with Applied Materials to learn about epitaxy. This is when a monocrystalline film is grown on the substrate, taking on a lattice structure that matches the substrate. It forms a high-purity starting point for building a transistor and is also the basis of the strain engineering in a modern process.

Since holes have lower mobility than electrons, p-type transistors are inherently lower performance than n-type transistors (which is why, before we had CMOS, the semiconductor industry was dominated by NMOS and its variants: n-type transistors with some sort of pull-up resistor or transistor). Since epitaxy improves performance, it was first used for the p-type transistors.


Basically, the source and drain are etched out to form a pit and then the pit is filled by depositing epitaxial silicon (with Applied Materials equipment in most cases). It is actually deposited until the source/drain is proud of the surrounding silicon. Adding small amounts of impurities that are larger than silicon, such as germanium, during deposition induces strain in the lattice which turns out to enhance mobility in the channel. It increases transistor speed but does so, unlike many other things we might do, without increasing leakage and so without increasing static power.

But now, at the 22/20nm nodes, epitaxy is needed to get extra performance out of the n-type transistors too, contributing around 20% to mobility.


As usual, almost anything associated with p-type transistors is the other way around for n-type. So to improve performance, the strain needs to be tensile. To induce tensile strain in n-type transistors the impurities need to be smaller than silicon, such as carbon or phosphorus atoms. Carbon is 62% smaller than a silicon atom, for example. This increases electron mobility and thus n-type transistor performance.


There are several advantages of epitaxy especially when it is used for both transistor types:

  • precision channel material (since it is not used for source and drain) enhances performance
  • physically raised source and drain keep metal contacts away from channel
  • increased strain on channel increases drive current

Applied is the leader in epitaxy equipment, having shipped over 500 systems (and more every week). Their revenue in this area increased by 80% over the last 5 years. Looking forward, the market is moving towards new channel materials such as III-V compounds, which have inherently higher electron mobility.

The bottom line message: nMOS epitaxy is essential for faster transistors inside next-generation mobile processors. It boosts transistor speed by the equivalent of half a device node without increasing off-state power consumption. What’s not to like? That is why it is coming to a 20/22nm process near you.

More details here.


System Reliability Audits
by Paul McLellan on 07-25-2013 at 12:09 pm

How reliable is your cell-phone? Actually, you don’t really care. It will crash from time to time due to software bugs and you’ll throw it away after two or three years. If a few phones also crash due to stray neutrons from outer space or stray alpha particles from the solder balls used in the flip-chip bonding then nobody cares.

How about your heart pacemaker? Or the braking system in your car? Or the router at the head of a transpacific fiber-optic cable? OK, now you start to care.


iRocTech provides audit services at the system level for these sorts of situations. However, at the system level, the overall reliability depends, obviously, on the reliability of the various components. One big problem is that the component suppliers are not always co-operative. In some cases they simply don’t know the reliability of their components. But they also tend to provide only the best possible data so that it cannot be used against them. It is as if we went to TSMC and asked about cell timing, were given the typical corner, and then, when we asked about a worst-case corner, were told they hadn’t a clue, because they didn’t want anyone to know just how slow the process might get.

The problem is actually getting worse. For all the same reasons that we want to put 28nm and 20nm silicon into cell-phones (especially low dynamic and leakage power, lots of gates, performance), engineers designing implantable medical electronics and aviation electronics want to do so too. But the leading-edge processes and foundries are driven by the mobile industry, which is probably the least concerned with reliability of all semiconductor end-markets (well, OK, birthday cards that play a tune when you open them, $5 calculators, but these are not really markets). This means that there is not as much focus on reliability, and on measuring it, as the markets outside of mobile require.

The big markets that iRoC works on for system reliability are:

  • networking: not your living room wireless router but the big ones that form internet and corporate backbones. they need an accurate MTBF number
  • automotive: an especially extreme temperature environment (it gets hot under the hood in the desert) and very long lifetime (cars need to work for 15-20 years)
  • avionics: at high altitude (never mind in space) there is 300-400 times the neutron flux that there is at sea level
  • medical: in particular implantable medical. these are very low voltage since you may have to open up someone’s chest when the battery runs out. and they sometimes end up in hostile environments too, when you go for an MRI or a CAT scan or get on a plane
  • nuclear plants: historically these have been built with mostly electro-mechanical technology due to the neutrons and gamma rays that may be released in an emergency, but they are now retrofitting and need to be able to use electronics
  • military and space: there really aren’t any rad-hard foundries left so commercial components are used more and more, but reliability has to be high in an aggressive environment

What these industries would like to do is push their system reliability requirements down to the component vendors, but compared to mobile they don’t have enough influence, at least in the short term. A second-best solution is to find out the reliability of the components and roll it up into a system reliability number.

One end-market that is not on the list is cloud computing. At the level of big data centers, events that we consider rare on our own computer (a disk drive fails, the processor melts, the power-supply blows up) are everyday occurrences and so the infrastructure has to be built to accommodate this. For example, GFS (Google File System) never stores any file on less than three separate disks in different geographical locations (Google is actually prepared for a meteor hit on a datacenter that permanently destroys it without impacting service). I don’t want to imply Google is special, I’m sure Facebook and Amazon and Apple are all the same, just that I know a little more about Google since they have published more in the open literature (and I have done some consulting for them).

Since some measurable problems, especially latchup and single event functional interrupt (SEFI), are actually very rare, they are hard to measure. If only a short period of measurement is done then the numbers may look deceptively good. However, the reality is that the mean might be good but the standard deviation is enormous. A better reliability measure than the mean alone is the mean plus one standard deviation. To get that measure to look good, extensive measurement is required to get the standard deviation down to something manageable, along with a better estimate of the mean. Single event upsets (SEU), which can be accelerated with a neutron beam (as I wrote about here), are much more common and so the standard deviation is much narrower.
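As a back-of-the-envelope illustration of why mean plus one standard deviation is the more honest number, here is a minimal sketch (not an iRoC tool, and using made-up sample values) that computes both from a handful of measured event rates. With only a few observations of a rare event, the sigma term can easily dominate the mean, and it only shrinks with more measurement.

```cpp
// Sketch of the conservative reliability metric described above: report
// mean + 1 standard deviation of measured event rates (e.g. latchup/SEFI
// rates in FIT) rather than the mean alone. Sample values are hypothetical.
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical failure rates (FIT) from repeated measurement runs.
    std::vector<double> fit = {12.0, 15.5, 9.8, 40.2, 11.3, 13.7};

    double mean = 0.0;
    for (double v : fit) mean += v;
    mean /= fit.size();

    double var = 0.0;
    for (double v : fit) var += (v - mean) * (v - mean);
    var /= (fit.size() - 1);          // sample variance
    double sigma = std::sqrt(var);

    // With few measurements of a rare event, sigma can dominate the mean;
    // longer measurement campaigns shrink sigma and tighten the estimate.
    std::cout << "mean = " << mean << " FIT, "
              << "mean + 1 sigma = " << (mean + sigma) << " FIT\n";
    return 0;
}
```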


Of course, once there is a measure, the question is what to do about it. It is a well-known proverb that a chain is only as strong as its weakest link. But a corollary is that there is no point in having especially strong links; in particular, there is no point in strengthening any link other than the weakest. Identifying the lowest reliability component and improving it is how overall system reliability can be improved.

iRoc Technologies website is here.


From Layout Sign-off to RTL Sign-off
by Pawan Fangaria on 07-25-2013 at 5:00 am

This week, I had a nice opportunity to meet Charu Puri, Corporate Marketing, and Sushil Gupta, V.P. & Managing Director, at Atrenta’s Noida office. Well, I have known Sushil since the 1990s; in fact, he was my manager at one point, in a job I had before Cadence. He leads this large R&D centre, consisting of about 200 people, at Atrenta’s Noida facility. In fact, they have just moved into a new building, yet to be inaugurated. I will write more about it and various development stories when the inauguration happens.


[Sushil Gupta]

Coming back to Atrenta’s product and technology edge, it was an intriguing discussion on how Atrenta is solving today’s SoC problems. Sushil talked about Atrenta’s SpyGlass being deployed for SoC designs across the complete mobile ecosystem; rightly so, as what we used to have on a PC or laptop has shifted to the handheld smartphone. That has been possible with the advent of SoCs, where multiple functionalities have been squeezed into the same chip. However, it’s not so simple a road to ride, as there are tremendous challenges: the very small window of opportunity for a design, the complexities of verifying and integrating multiple blocks and IPs from different origins, process bottlenecks and physical effects at small geometries, performance-power-area optimization, and so on. The only viable option is to reduce long iterative loops in the design flow and introduce shorter, faster loops, early in the cycle, to set the design right. That would significantly reduce the possibility of re-spins and also provide an edge in time-to-market.

Hence Atrenta’s philosophy of pulling the sign-off process up to the earliest possible opportunity, i.e. the register transfer level (RTL). Traditionally, sign-off is done at the last stage prior to fab, i.e. layout. RTL sign-off cannot completely eliminate layout sign-off, but it can definitely and significantly reduce the long iterative loops from layout back to earlier stages and enable the designer to achieve faster convergence of the design.

As is evident, post layout sign-off is too late and too risky.

Atrenta’s guiding methodology is to do RTL sign-off before proceeding further. And Atrenta provides a complete platform for RTL sign-off. That’s amazing!!

As we can see, the platform contains all the ingredients to realise an SoC: a complete design flow, IP flow and integration, debug and optimization. In fact, Atrenta has also collaborated with TSMC and provides an IP Kit which validates and qualifies any soft IP against TSMC process requirements before it is integrated into an SoC.

I will talk more about Atrenta’s individual products/technologies and their capabilities in future articles. But I must share a memory: when I first read the Atrenta SoC Realization whitepaper (about two years ago), I talked about it with Sushil in his earlier office. And today, to my excitement, Atrenta has strengthened that realization even further!!