
A New Kind of Analog EDA Company

by Daniel Payne on 07-10-2018 at 12:00 pm

My IC design career started out with circuit design of DRAMs, so I quickly learned all about transistor-level design at the number one IDM in the world, Intel at the time. In the early days, circa 1978, we circuit designers had few EDA tools, mostly a SPICE circuit simulator followed by manual extraction, manual netlisting, manual layout, manual DRC and, of course, manual transistor sizing. In 2018 the scene has changed for the better because EDA companies have offered up lots of point tools to eliminate all of that manual, error-prone work that I had to grind through in the 1970s.

I’ve been reading up on a new EDA company focused on automating some of the analog IC design challenges, and wanted to blog about it here to share what’s new and different at Intento Design, a company that I visited at #55DAC last month in SFO. The tagline at Intento Design is, “Responsive EDA for Analog IP”. For web sites I know what responsive design means, but for the phrase responsive EDA I needed to dig a bit deeper. A recent press release from June 7th revealed that STMicroelectronics is using a tool called ID-Xplore on their FDSOI circuits in the Aerospace Defense & Legacy Division for:

  • Design exploration
  • IP reuse
  • Use in a Cadence environment
  • Technology independent constraints

OK, that sounds great, but how does ID-Xplore actually work?

Here’s a tool flow diagram to explain what goes into and out of ID-Xplore:

The SPICE analysis portion in the middle immediately caught my eye, yet the company isn’t offering a SPICE circuit simulator; instead, they are using SPICE-accurate analysis to help size transistors from a traditional schematic tool like Virtuoso from Cadence. A popular brute-force approach to transistor sizing is to run lots of transient SPICE circuit simulations during the process, which consumes lots of SPICE licenses, but that’s not the Intento approach at all.

With ID-Xplore you first create your transistor-level schematic and then add an intention view, something that every analog circuit designer already has in their head because they know the intention of each circuit block in their libraries along with the specifications and requirements. A Test Bench is another input to ID-Xplore along with PDK models from your foundry.

Analog designers are now ready to invoke ID-Xplore and the tool will help them:

  • Verify that the selected parameters meet the DC bias specification
  • Start sizing transistors to meet the intention view
  • Explore new values of electrical parameters

ID-Xplore comes with a GUI so that circuit designers can visualize performance results, or even see back-annotation of transistor size parameters used in OpenAccess. During exploration you can see a whole range of design curves, or performance profiles as shown below which include sensitivity numbers:

Earlier I talked about the intention view, so let’s dive a bit deeper into that new step. If your analog circuit contains a differential pair, then you click in the schematic and add that intention. Likewise, if you need a regulating clamp voltage then specify that intention. Analog designers carry these intentions in their heads, and now they have a tool to add them to their schematic as part of an automated tool flow, thus capturing their intention and experience as part of the IP creation process. The ID-Xplore tool takes all of these intentions defined by the circuit designer as constraints in a problem that needs to be solved, then uses a SPICE-accurate analysis tool to solve and produce results for analysis.
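Intento hasn’t published the internals of ID-Xplore, but the constraint-driven idea can be sketched in a few lines of Python. Every name, device equation and number below is invented for illustration; think of each intention as a predicate over a candidate sizing, and the tool as keeping only candidates that satisfy all of them:

```python
from itertools import product

# Hypothetical intention: a differential pair whose two devices must match.
def diff_pair_matched(sizing):
    return sizing["W1"] == sizing["W2"]

# Hypothetical intention: the DC bias current must land in a target window.
def bias_in_window(sizing, lo=9e-6, hi=12e-6):
    # Crude square-law stand-in for a SPICE-accurate DC analysis.
    i_bias = 0.5 * 200e-6 * (sizing["W1"] / sizing["L"]) * (0.2 ** 2)
    return lo <= i_bias <= hi

constraints = [diff_pair_matched, bias_in_window]

# Exhaustively enumerate a small sizing grid (dimensions in meters).
widths = [1e-6, 2e-6, 4e-6]
lengths = [0.18e-6, 0.36e-6]
candidates = [{"W1": w1, "W2": w2, "L": l}
              for w1, w2, l in product(widths, widths, lengths)]

# Keep only the sizings that satisfy every captured intention.
passing = [s for s in candidates if all(c(s) for c in constraints)]
print(f"{len(passing)} of {len(candidates)} sizings meet all intentions")
```

The point of the sketch is that once intentions live in the schematic as machine-readable constraints, exploration becomes a filtering problem rather than a pile of hand-run simulations.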


Cadence Constraint Manager

Related blog – The Intention View: Disruptive Innovation for Analog Design

What else can ID-Xplore help me automate? How about technology porting of analog circuits to a new foundry process model. This works because your initial schematic in Virtuoso can be re-loaded with a new PDK, then the designer can adjust constraints like the new PDK choice and supply values. All of your old constraints can be re-used or even updated while porting to a new technology. You can actually port circuits in one afternoon, not days or weeks, wow. FinFET, FDSOI, BD, Bipolar and CMOS technologies are supported by ID-Xplore.

You can probably imagine that a tool like ID-Xplore can create hundreds of different sized schematics quickly, giving you quite a range to choose from in meeting your specifications. There’s plenty of visualization to help pick out your favorite results using the N-Viewer tool, shown below where each green dot represents a sized schematic that meets specs while a red dot is a failed sized schematic:

N-Dimensional Viewer. Red is failing, green is passing.

Related blog – CEO Interview: Ramy Iskander of Intento Design

Another manual task that is automated with ID-Xplore is centering a circuit design across PVT corners. The analysis in ID-Xplore can be quickly visualized with N-Viewer across PVT conditions, so that the circuit designer can choose, in terms of which sized schematic best meets the specs across corners, what they prefer.
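The corner-centering idea can be illustrated with a toy sketch (hypothetical candidates, spec and numbers; not Intento’s actual algorithm): discard any sizing that fails a corner, then prefer the sizing whose worst corner still has the most margin:

```python
# Spec: gain must exceed 60 dB at every PVT corner.
spec_gain_db = 60.0

# Simulated gain per candidate sizing per corner (invented stand-in data).
candidates = {
    "sizing_A": {"SS/-40C/0.9V": 61.0, "TT/25C/1.0V": 68.0, "FF/125C/1.1V": 63.0},
    "sizing_B": {"SS/-40C/0.9V": 64.0, "TT/25C/1.0V": 66.0, "FF/125C/1.1V": 65.0},
    "sizing_C": {"SS/-40C/0.9V": 58.0, "TT/25C/1.0V": 70.0, "FF/125C/1.1V": 66.0},
}

def worst_case_margin(gains):
    # Margin at each corner is gain minus spec; the binding corner is the minimum.
    return min(g - spec_gain_db for g in gains.values())

# Keep candidates that pass every corner, then center on the best worst case.
passing = {name: g for name, g in candidates.items() if worst_case_margin(g) >= 0}
centered = max(passing, key=lambda name: worst_case_margin(passing[name]))
print(centered)  # sizing_B: its worst corner still has 4 dB of margin
```

A sizing that is impressive at the typical corner (like the failing sizing_C here) is exactly what centering is meant to steer you away from.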

Summary

I hope that you picked up on all of the cool new automation features now available from Intento Design:

  • Explore circuit sizing on cells or blocks
  • Explore around your DC bias plan
  • Evaluate PDK version changes and how they impact each analog circuit
  • Explore your analog cells exhaustively
  • Confirm or tune hand-sizing results
  • Quickly create block-size estimates versus performance
  • Train other analog designers about the intention of each of your analog circuits
  • Explore biases and sizing sensitivity

More 55DAC blogs

Related Blog


Drop-In Security for IoT Edge Devices

by Bernard Murphy on 07-10-2018 at 7:00 am

You’re excited about the business potential for your cool new baby monitor, geo-fenced kids’ watch, home security system or whatever breakthrough app you want to build. You want to focus on the capabilities of the system, connecting it to the cloud and your marketing rollout plan. Then someone asks whether your system is architected to be secure. Do you really need to worry about this in your low-cost consumer product? They keep on pushing, bringing up the Mirai botnet attack on the Dyn domain name server in 2016, which DDoS-bombed many major sites. Notably, that botnet was launched through cameras and baby monitors, among other devices.

Much of the problem was traced back to webcams made by Hangzhou XiongMai Technology, widely and publicly revealed to have weak security. That company made an attempt to correct the problem by recalling up to 10,000 of their devices. Add to that cost the reputational damage from worldwide publication of the discovery (if their devices are easy to hack, maybe creeps can watch my family at home?), likely industry moves to self-preservation and/or regulatory control to shut out unsafe devices, and you start to see that ignoring security may not be a wise move.

OK, you get it, you’d better add security. But this is not a domain for amateurs. Security is hard – very hard. Software-only security may seem an easy solution but is the most attractive target for hackers; a whole industry (q.v. BlackHat) and hordes of enthusiastic amateurs are dedicated to finding obscure holes in software and building exploits to attack them; hardware-based solutions are generally more robust. Since you’re building hardware anyway, maybe you can throw in a crypto core and TrustZone control on the bus? I’ll say again – security is hard. Your crypto core can be hacked through power side-channels and TrustZone works only if you don’t make mistakes in what and when you connect. Then there are issues with over-the-air (OTA) software updates, secure boot, etc, etc. The list seems endless – and is always growing.

An increasingly attractive solution is to use a security subsystem IP rather than scattering security defenses around your device; put all your security eggs in one basket and watch that basket carefully. In part this reduces the attack surface, the number of different ways an attacker can get at the secret stuff, so there are fewer places to check. And in part, if a single organization, expert in security, builds the secure subsystem IP, they will be better able to anticipate and more completely defend that subsystem against attacks that are still possible through that reduced attack surface. What happens outside in application-space compute and memory, that’s still up to you. But you can be much more confident that what should be secure will remain secure.

Intrinsix now provides an IoT SoC Solution with drop-in NSA Suite B Security to meet this need. This subsystem provides a Root of Trust, a set of core functions from secure boot and key-store through encryption functions, a true random number generator (TRNG) and secure memory. The hardware is also designed for side-channel resistance, with methods to defend against timing attacks (through fixed-time operations) and differential power analysis attacks (perhaps through added power noise, though there are other methods). The subsystem runs to just 20k gates and sits on the system bus (e.g. AXI); you interact with it through a set of APIs, for example to convert encrypted data to plaintext and vice-versa.
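To make the fixed-time idea concrete in software terms (the Intrinsix defense is implemented in hardware; this sketch is only the same principle): a naive byte-by-byte comparison returns early at the first mismatch, so its response time leaks how many leading bytes of a guess were correct, while a timing-safe comparison touches every byte regardless. Python ships such a comparison as `hmac.compare_digest`:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early exit: timing depends on where the mismatch is
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest is the standard-library timing-safe comparison.
    return hmac.compare_digest(a, b)

secret = b"s3cret-key-material"
assert constant_time_equal(secret, b"s3cret-key-material")
assert not constant_time_equal(secret, b"s3cret-key-materiaX")
```

An attacker timing `naive_equal` can recover a secret one byte at a time; the fixed-time version gives them nothing to measure, which is exactly the property the hardware enforces for its crypto operations.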

This comes with a complete stack on top of the hardware, from APIs through boot and provisioning support, tunneling and encryption through multiple methods, and authentication. This truly is a turnkey system. It also comes with a complete development environment: emulator, models and a UVM verification suite.

How good is it? This subsystem has been (self-)certified to FIPS 140-2 compliance (covering, to my knowledge, AES, SHA, the public key infrastructure and the TRNG) and in particular meets the NSA Suite-B security standard, which is required for US DoD secret-level security. Since it is now in its 3rd generation, in production and being used by the DoD, I guess we can assume it is both production-worthy and meeting DoD expectations. You can learn more about this secure subsystem from this webinar, presented by Chuck Gershman and Mark Beal of Intrinsix.


Mentor Calibre Panel

by Alex Tan on 07-09-2018 at 12:00 pm

Getting your tape-out done on time is hard, but can it be made easier? That was the main topic of Mentor’s Calibre Panel held at DAC 2018, attended by a few key players in the IC design ecosystem: Bob Stear, VP of Marketing at Samsung, represented the foundry side; from the IP side, Prasad Subramaniam, VP of R&D and Technology at eSilicon; and from the fabless side, Satish Dinavahi, Sr. Director at Qualcomm India.

At the core of the discussion is the fact that a heterogeneous chip design involves many teams dealing with evolving specifications. Project scheduling then becomes one of the critical facets and should be done proactively rather than reactively to accommodate the changes. On the other hand, physical verification (PV) is a common segment for all, and Mentor’s Calibre has been the mainstream PV sign-off tool. As such, the panel was presented with questions related to their tape-out experiences in this space. In his opening remarks, Joe Davis, Product Marketing Director at Mentor, shared that based on last year’s DAC customer survey, 50% of tapeouts are late across technology nodes and product types.

Half of tapeouts late, how does that ring to you guys?
Bob agreed that for larger projects with one-to-two-year design cycles that claim rings true. Satish concurred that design teams need to meet the many PPA requirements and can absorb a few days of schedule shift (Qualcomm products are mostly for mobile). From the IP standpoint, Prasad acknowledged that a project starts with a given tapeout date but eventually never meets it; instead, the date gets adjusted as the project plan is refined. “And by the time you tapeout, it is surely not the same date,” he quipped. So the continuous improvement of the plan should be addressed up-front.

63% of late tapeouts were due to PV/DFM. How does that sound to you?
Timing optimization cycles usually drive the schedule, according to Satish. Designers will ignore physical convergence until late in the cycle while addressing PPA (performance, power and area) targets. The expectation is that any delay will get absorbed into PV. Prasad drew a distinction between FinFET and non-FinFET designs. “If I look on non-finFET designs, I would say most of the delay are in the timing,” he said. “The PV cycle … can be resolved quickly. So the delay can be absorbed in PV.” However, it is more complex for FinFET designs. Hence, if project allocation is done the same way as for a non-FinFET design, any delay will not be absorbed, as PV is more complex.

What sort of strategy do you put in place to deal with it?
Prasad said that we should anticipate the complexity of the chip by allocating adequate time for PV and technology. On top of that, his recommendation is to do verification as early as possible to uncover any potential technology-related issues. It can be done at the block level, as the complete context may not be available yet. Usually only 2-3 weeks is allocated for completing PV on the same technology, but it might be 2-3 months for a new process node.

From the foundry standpoint, Bob mentioned that the key is addressing the front-end first: looking beyond the sales/marketing guys to understand the technology requirements and mapping them to your product plan with regard to design changes. Once you get further along, try to bring the back-end up to speed on automation, parallelization, different algorithms in the EDA tools, etc.

Satish recommended two approaches: first, assessing what went wrong across multiple projects and reinstating options in the methodology; and second, including extra ECO cycles not only for timing but also for physical verification related issues.

With the long list of complex design rules, such as at 7nm, do PD engineers need to memorize all of them to get their job done?
Satish responded that it is hard for a PD engineer to memorize everything. In the DRC deck, more than half are common and basic rules, such as spacing or EOL rules, which are easy to understand. The more complex rules coming from new technologies, such as DPT and forbidden rules, are not easy to understand and might incur more time when first applied on a project.
Prasad supported the view that only by stumbling across a violation during verification will a designer learn what the rule violation means and how to avoid it in the future.

Custom layout engineers come up with their own methods to be DRC clean, not wanting to be slowed down. What does management think, and what kind of area penalty is OK?
“The good news about custom design is that because you design a smaller circuit, you have good visibility of everything: the netlist, the layout…the tools are very good,” said Prasad. So in his view, custom design is much better understood than chip design, so (DRC) problems could be easily tackled.

Satish’s assessment of his past tapeouts shows that the flexibility to adjust area early in the floorplanning stage is important, as is having PV capability so that DRC cleanliness can be ascertained when cells are moved to accommodate such adjustments. Increasing area by itself does not guarantee a DRC-clean outcome.

In PV closure, how do engineers in custom vs digital flow attack the problem differently?
Prasad stated that IP/analog designers design their blocks ahead of time and typically they come DRC clean, although there might be issues at the chip level (i.e., block boundaries). So doing a chip-level mock integration early in the floorplanning stage can help address these boundary issues. The other issue is that metal fills at the block and chip level are not necessarily aligned. Their interaction is not clean, and this is the kind of chip-level surprise that needs to be resolved.

Satish elaborated that IP/block abstraction takes place to reduce the large design footprint during the physical implementation stage. As such, route optimization and subsequent PV are done on the abstractions and polygons created during this process, and they do not necessarily capture all the DRC violations that appear when the full-blown GDS2 design database is used. It is easier to deal with DRC signoff at the custom level but harder at the full-chip level and at the interfaces of these IPs.

Bob felt that the digital side benefits from automation, compared with the analog side, which requires an iterative process and is sensitive to fine-tuning.

Digital designers are usually automation driven by nature, focusing on PPA and not doing sign-off until the end. What is required for PV sign-off to get the same attention as power and clock tree synthesis?
Satish stated that having in-design sign-off within place and route is a good example, as designers like to operate in their own tool environment, which they are familiar with. “They don’t want to go to the GDS2 world, where lots of physical information exist,” he said. “It will enable them to run sign-off quality checks and help them to do sign-off closure.” Bob and Prasad agreed overall that having in-tool verification will greatly help PV sign-off.

For more information on in-design or in-tool verification using Calibre-RTD, please check my earlier blog HERE or the Mentor Calibre webpage.

More 55DAC blogs


China Trade war is accelerating with First Blood in Chips

by Robert Maire on 07-09-2018 at 7:00 am

China didn’t wait around for the US to announce more tariffs or export restrictions, instead it went on the offensive and put an injunction in place to prevent Micron from shipping product into China. Although our view is that Micron will see little impact from this action, the headlines caused the semiconductor sector stocks to sell off biggly.

The prohibition was very thinly veiled as an injunction in an alleged court victory in China between UMC and Micron. Anyone who for a minute believes that this was a true and fair verdict in which Micron infringed upon UMC’s IP and not part of the escalating trade war between China and the US, also believes in Santa Claus and the Easter Bunny.

It also reinforces our view that although China can’t go toe to toe with the US on tariffs due to the trade imbalance, there are many other ways to fight the trade war and inflict damage. There also isn’t a simple retaliatory reaction that the US can respond with as it can with tariffs, so it’s a smart move by the Chinese.

As we have stated several times over the past months, the “headline” risk to chip stocks due to China trade issues is large and accelerating by the day. There is no de-escalation on the horizon and we feel it will get worse before it gets better, so the stocks still have more downside risk.

Rather than announce technology export restrictions to China as originally planned on June 30th, the administration punted and kicked the can down the road to congress, suggesting congress enact legislation that would enhance CFIUS or something similar. We all know that congress can’t do anything in less than a decade, so that effectively killed the movement to restrict technology exports to China. China on the other hand had no such compunction and effectively shut down Micron in China.

Not much long term impact on Micron
While the injunction against Micron makes headlines and is good PR for China the actual impact is less so. The chip memory market is a giant zero sum game commodity market very similar to the oil market. If Micron can’t sell product in China, it can sell it elsewhere in the globe. Samsung can sell memory in China to make up for it and Micron can serve the customers Samsung gives up to sell to China.

Imagine if the US stopped importing Iranian oil, it would have zero impact on the Iranians as they could sell it to other global customers and the US would import more from other sources…a giant zero sum game of a commodity product. However it would still rile the oil markets and make headlines.

It’s unclear if the ban is enforceable anyway
If Apple wanted to use Micron memory in its iPhones made in China, would China really stop Foxconn from making iPhones and put a million Chinese workers who assemble iPhones out of a job? Probably not. The collateral damage is very far reaching and we doubt that anyone has taken the time to figure out the ramifications.

It’s déjà vu all over again….
We have seen a smaller version of this movie before with Veeco of the US and AMEC of China as the previous actors. Veeco sued AMEC for infringing, AMEC countersued in China, and won of course, effectively shutting down Veeco in China even though AMEC started the infringement.

UMC ripped off Micron’s IP in Taiwan yet UMC sued in China and won….surprise, surprise. I don’t think there is any court in China that would rule in the favor of a US company on IP infringement if they valued their lives. Basically the courts in China will always rule against the US so using the courts in a trade or IP conflict will always get the desired result.

This should come as no shock to most in the industry as we have seen it before in other countries. AMAT got put in the penalty box by Samsung for going after a little Korean company that had infringed on Applied’s IP.

The moral of the story is you can’t protect your IP in court; you have to prevent it from being ripped off in the first place by tightening security.

The secret recipe
A couple of years ago we had several discussions with a number of people in the industry saying that China would never get anywhere in the chip business, as they had no recipe to make memory chips and no one was willing to license it to them. It’s clear now that they can get into the chip business simply by stealing the IP, much as they got into the fighter airplane business by ripping off US companies that make fighter planes. The chip business is just as important, perhaps even more important than fighter planes, so it should come as no surprise that any effort will be made to get the IP by whatever means.

The only bottleneck is the tools

Even though the Chinese can steal the recipe for chips, they can’t make them without the tools, especially litho tools. The Japanese and others can sell China tools to make chips, but they simply can’t make competitive chips without US-manufactured tools. Stopping the export of US semiconductor equipment is the only truly effective way of shutting down the Chinese chip industry, and sooner or later the administration will figure that out and impose export restrictions. Sooner or later equipment companies will get dragged into the trade war whether they like it or not.

The storm will grow to suck others into the war
We also assume that other companies will get sucked into the trade war. Intel has a fab in China and will undoubtedly get involved…though not willingly. The US could stop Intel from exporting technology to be used in its China plant, which could be easily copied. Other companies will also get drawn into it.

Is Taiwan the next Crimea?
Much as Russia “annexed” Crimea, so will China “annex” Taiwan. After all, Taiwan is a rogue “runaway” province of China, and China underscored that fact by having airlines change Taiwan’s name to be part of greater China. When (we say that because it’s not “if”) China annexes Taiwan, it will get TSMC and become the dominant leader of the semiconductor industry with no need to steal it. UMC, also of Taiwan, is obviously already part of China, as it was the actor that ripped off Micron.
If the US does indeed try to slow down China’s entry into the chip business, China could turn around and accelerate it through Taiwan. We are not sure that the US will come to the aid of Taiwan as we didn’t do a lot over Crimea.

The stocks
We continue to see lots of risk in semi stocks as there is no resolution on the horizon or anywhere else for that matter…it only gets worse by the day. All chip companies are at risk in one way or another. The stocks, if not immediately impacted, will have the sword of Damocles over their heads, waiting to get sucked into the fray or become collateral damage.

The other issue we have is “death by a thousand cuts”. That is, the news continues to flow out in a continuous negative stream that nibbles away at stock valuations day after day.

All in all we are not happy with any of these possibilities and as such see no reason to go against the flow and buy into the group. We would still continue to be light on or short the group and the SOX index in general.

We still think Micron remains cheap and the sell-off was an overreaction, but it’s very hard to “fight the tape”.

Have a happy fourth of July and watch out for continuing fireworks in the chip industry…!!!!


55DAC Trip Report IP Quality

by Daniel Nenni on 07-09-2018 at 7:00 am

This year I signed books in the Fractal booth (compliments of Fractal) and let me tell you it was quite an experience. IP quality is a very touchy subject and the source of many more tape-out delays than I had imagined. As it turns out, commercial IP is the biggest offender which makes no sense to me whatsoever. Even more shocking, one of the big IP vendors is the biggest IP quality offender!?!?

Signing books gives me a chance to network, make new friends, and gather information. Not only did I get to hear the IP discussions in the booth while signing books, I had a few discussions of my own. I also met quite a few Intel people but they were tight lipped about the status of 10nm. One thing they did comment on was the way the Intel board threw Intel CEO BK under the bus but that is another blog.

One clear example of IP QA dysfunction: a company I spoke with has different IP QA groups using different tools and methods, mainly because of acquisitions and mergers, but still, we are talking about serious tool inefficiencies and redundant IP QA staff that cost hundreds of thousands of dollars every year. EDA is all about buying tools instead of hiring masses of people and brute-forcing design, right? But not in IP QA? It really was shocking to me how many IP QA landmines there are out there just waiting to be stepped on.

It reminds me of a situation we had locally. My wife and I were driving on the highway near us a while back and she pointed out some dead trees from the drought and we wondered when they would be cleared. Sure enough, one fell and hit a car killing the driver. Now the trees are being cleared because that is how our local Government works: “If it ain’t broke don’t fix it” unless of course someone gets killed. The same thing happened to a dam near us during the downpours last year. Apparently, they knew the spillway was not structurally sound but there wasn’t budget to fix it. There was budget however to remove thousands of people from their homes and host them in evacuation centers while they wondered if their homes would be there when they were allowed to return.

Bottom line: The IP QA crisis will probably not kill a person or destroy homes, but it could definitely kill a design and get people fired. As an IP vendor you will lose business, because a “Quality IP Vendor” reputation that is earned over years can be lost overnight.

IP quality really is a challenge, since you do not know whom to trust. Writing your own tools and checks without knowing what everyone else (foundries, IP vendors, competitors, etc.) is doing is no longer acceptable in my opinion, nor is just accepting IP from a vendor without independent checks, which may or may not be within acceptable foundry guidelines.

The answer of course is buying a tool from an independent company (Fractal) that works with foundries, IP vendors, and the top semiconductor companies around the world. Crowdsourcing has truly come to IP and library QA, absolutely.

Fractal Resources:
SemiWiki Coverage
Company Website
Webinar Replay: IP and Library QA with Crossfire

More 55DAC blogs


SiFive Unveils E2 Core IP Series for Smallest, Lowest Power RISC-V Designs

by Camille Kokozaki on 07-06-2018 at 12:00 pm

Full configurability with advanced feature sets allows for broad applications, including microcontrollers, IoT, wearables, and smart cards

The E20 and E21 add to the growing list of SiFive RISC-V cores addressing the embedded controller, IoT, wearables, and smart toy markets. On June 25, DAC opening day, SiFive announced the availability of its E2 Core IP Series, configurable low-area, low-power microcontroller (MCU) cores designed for use in embedded devices. From their press release, the E2 Series extends SiFive’s product line with two new standard cores: the E21, which provides mainstream performance for MCUs, sensor fusion, minion cores and smart IoT markets; and the E20, the most power-efficient SiFive standard core, designed for microcontrollers, IoT, analog mixed signal and finite state machine applications. Additionally, the company announced enhancements to its existing standard E3 and E5 Core IP Series.

The SiFive E20 and E21 are designed for markets that require extremely low-cost, low-power computing, but can benefit from being fully integrated within the RISC-V software ecosystem. Fully compatible with the exact same software stack, tools, compilers and ecosystem vendors as other higher performance SiFive cores, the E2 Series enables these new markets to take advantage of the robust software ecosystem that has been exponentially growing since SiFive first introduced commercial RISC-V cores in 2016. Both cores are fully synthesizable and verified soft IP implementations that scale across multiple design nodes. The new product series provides a variety of new features, including a fully configurable memory map, multiple configurable ports, tightly integrated memory (TIM), fast IO access and a new CLIC interrupt controller for extremely fast interrupt response, hardware prioritization, and pre-emption.

Furthermore, SiFive gives designers the ability to configure a SiFive RISC-V Core Series to their specific application needs, with the ability to fine-tune performance, microarchitectural features, area density, memory subsystems and more within a given Core Series. Customers can either directly leverage the silicon-proven standard SiFive Core IP like the E21 or use it as a starting point for their own customizations.

Naveed Sherwani, SiFive’s CEO, commented, “I am happy to announce the availability of SiFive’s new E2 Series RISC-V Core IP. Its small area, low power, and low latency interrupt controller make it the obvious choice for many embedded applications. Like all SiFive IP, the E2 Series is fully customizable and can be configured to meet demanding high-performance applications as well as extremely low power and area sensitive applications. E2 Series evaluation RTL and FPGA images are available for free on our website.”

“SiFive’s Core IP is the foundation of the most widely deployed RISC-V cores in the world and represents the lowest risk and fastest path to customized RISC-V based SoCs,” said Yunsup Lee, co-founder and CTO, SiFive. “Our Core IP Series takes advantage of the inherent scalability of RISC-V to provide a full set of customizable cores for any application – from tiny microcontrollers based on our new E2 Core IP Series to our previously announced, Linux-capable, multicore U Core IP Series.”

In addition to announcing the new E2 Core Series, SiFive also expanded its E3 and E5 Series to support coherent multicore configurations for high-performance embedded applications. In addition to multicore support, the E3 and E5 Series have a new enhanced multiplication unit which allows the E31 and E51 Standard Cores to achieve over 3 CoreMarks/MHz while still using open-source GCC compilers. The E3 and E5 Series are ideal for high-performance, real-time applications such as storage, industrial, modems, and networking.

E Series: 32-bit and 64-bit Embedded Core IP
E5 Series – 64-bit Embedded Core IP
E5 Series embedded cores offer 64-bit performance at 32-bit price and area
E3 Series – 32-bit High Perf Core IP
High-performance 32-bit RISC-V embedded cores; the E3 Series is the most widely deployed RISC-V core in the world
E2 Series – 32-bit Low Power Core IP *New*
SiFive’s smallest, lowest power, 32-bit core series. The E2 Series is optimized for deeply embedded, microcontroller applications

U Series: 64-bit Linux-Capable Core IP
U5 Series – 64-bit Applications Core IP
The world’s first Linux-capable RISC-V core, with high performance and optimized for maximum efficiency

When I visited the SiFive booth in the RISC-V pavilion, Drew Barbier, their Sr. Applications Engineer, was highlighting the E2 Series capabilities. He shared with me the following insight:

Some of the salient features of the E2 Series Memory Subsystem are:

  • The E2 core can be configured with 1 or 2 bus interfaces
  • The S-Bus is a highly optimized crossbar allowing fast access to the Tightly Integrated Memory banks (TIMs) and System Port
  • Optional parallel access support to 2 TIM banks with 2 bus interface configurations
  • RISC-V atomic instruction support for single-cycle Read-Modify-Write operations
  • E2 Core instruction and data accesses can target any Port or TIM
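The single-cycle atomic Read-Modify-Write support corresponds to the RISC-V “A” extension, which C code reaches through ordinary C11 atomics. A minimal sketch (the counter and helper function are hypothetical illustrations, not part of SiFive’s deliverables):

```c
#include <stdatomic.h>

/* Shared counter updated with an atomic read-modify-write.
   On an RV32 core implementing the "A" extension, atomic_fetch_add
   typically compiles down to a single amoadd.w instruction, matching
   the single-cycle RMW behavior described above. */
atomic_int counter = 0;

/* Atomically adds 'amount' and returns the value the counter held
   before the add. */
int bump(int amount) {
    return atomic_fetch_add(&counter, amount);
}
```

The same source compiles unchanged on any C11 target; only the generated instruction sequence differs between cores with and without the atomics extension.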

The Core Local Interrupt Controller (CLIC) provides:

  • A simplified interrupt scheme allowing low latency interrupt servicing, hardware prioritization, and pre-emption
  • Extremely low latency with support for vectoring directly to the ISR (6 cycles to the first ISR instruction, 18 cycles to complete a simple ISR in the E2 Series pipeline)
  • Interrupt pre-emption with up to 16 nesting levels and programmable priorities within each level
  • An easy-to-use programming model based on the GCC interrupt function attribute (no assembly required), plus multiple software interrupts with programmable priority levels and drivers included in the SDK
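The “no assembly” programming model can be sketched in plain C. The interrupt attribute below is the standard RISC-V GCC mechanism (guarded so the sketch also compiles on a non-RISC-V host); the handler name and tick counter are hypothetical:

```c
#include <stdint.h>

/* On RISC-V GCC, the interrupt function attribute makes the compiler
   emit the register save/restore and the interrupt-return sequence
   itself, so the handler needs no hand-written assembly. */
#if defined(__riscv)
#define CLIC_ISR __attribute__((interrupt))
#else
#define CLIC_ISR /* plain function when built on a non-RISC-V host */
#endif

volatile uint32_t timer_ticks = 0;

/* Hypothetical timer handler: with vectoring enabled, the CLIC
   transfers control here directly. */
CLIC_ISR void timer_isr(void) {
    timer_ticks++;
}
```

A higher-priority CLIC interrupt arriving while `timer_isr` runs can pre-empt it, up to the nesting depth noted above.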

For more information on SiFive’s RISC-V Core IP, including full datasheets, specifications and app notes, visit www.sifive.com. More details on the E2 Series can be found here, and a SiFive E2 Series RISC-V Core IP launch deck can be found here.


TI: Semiconductor Industry History of Innovation

TI: Semiconductor Industry History of Innovation
by Daniel Nenni on 07-06-2018 at 7:00 am

This is the fifth in the series of “20 Questions with Wally Rhines”

Texas Instruments is a remarkable company founded by remarkable people. And J. Erik Jonsson was one of the most remarkable visionaries of the 20th century. He was a renaissance man who created an industry and a fortune by following the needs of the emerging oil exploration industry, then semiconductors, and followed up as a statesman, as Mayor of Dallas, taking a city from the depression of being the site of the Kennedy assassination to being one of the most innovative centers of commerce in the 21st century. Today, Dallas is home to more than 10,000 corporate headquarters, including over twenty Fortune 500 companies. But it didn’t happen by accident.

The roots of TI go back to New Jersey in 1939. J. C. “Doc” Karcher developed reflection seismography technology that could reveal the character of strata beneath the earth and predict the most likely places to drill for oil. The East Texas oil field moved the center of momentum for the oil exploration industry to Texas, and so Geophysical Service moved with it. Eugene McDermott, Cecil Green, Bates Peacock and J. Erik Jonsson were the founders. Their technology for seismic analysis became a success, and they circumvented the costly approach to becoming a public corporation by acquiring a public company and changing the name to Texas Instruments. The original incorporation was ill-timed on December 6, 1941, since the attack on Pearl Harbor the next day changed the whole dynamic of business in the United States.

But these were very adaptable people. They had developed electronic analysis equipment based upon vacuum tubes to use for seismic exploration. Interestingly, the U.S. had no shortage of oil during World War II. What the U.S. needed was electronic equipment for the military. Texas Instruments altered its strategy, developed equipment under contract to the Department of Defense and survived the 1940’s. A Navy procurement Lieutenant, Pat Haggerty, was so impressed with this group from Dallas that he accepted a job with them after the war and eventually became the President and CEO of the company. Haggerty was intrigued by the Bell Labs announcement of the development of the transistor in 1947 and decided that the portable military equipment that TI had developed would benefit from the low power and potentially low cost of transistors. As a result, TI became one of the early group of companies that paid the $50K required for a license to produce the germanium transistor that had been developed at Bell Labs.

Not trusting to luck, TI also hired Gordon Teal who had been a primary developer of the techniques to purify germanium to make the transistor possible. Surprisingly, TI emerged as a key contender in the race to produce transistors for military and commercial use. Haggerty was convinced that consumer applications would drive the high volume so, while TI’s initial transistor revenue came from the government, he arranged a deal with Regency, a consumer products company, to market a transistor radio. TI developed the radio and Regency made the money. From then on, Haggerty became convinced that the money for the semiconductor revolution lay within its application to end equipment, a conclusion that drove TI’s strategy to both good, and not so good, results.

How can anyone criticize the evolution of a multi-billion dollar behemoth based upon a Haggerty driven decision to license the Bell Labs transistor? I can’t. It was truly a brilliant move. But the subsequent implementation led to difficulties that might have been avoided.

The drive to develop a silicon transistor was seminal. TI did so through development of the equipment to pull crystals and by understanding the chemistry of germanium, silicon, the dopants and the packaging materials. Two years ahead of the rest of the industry, TI was able to produce silicon transistors whose temperature stability overcame many of the problems of germanium. And the company grew from $20M in annual revenue to $200M in a short period of time.

During the summer of 1958, Jack Kilby, who as a new employee had no vacation, spent the summer shutdown creating a phase shift oscillator on a single chip, with gold wires bonding the discrete devices together on the same piece of silicon. Today it seems obvious that multiple transistors on a single chip would be valuable, but it wasn’t obvious then. Old timers I met at TI told me that it had been obvious that you could put more than one transistor on the same piece of silicon. “But why would you want to do it?” they asked. “You would never get both of them working at the same time.” Ridiculous today, as billions of transistors work in harmony to solve problems, but it wasn’t obvious then. It was to Jack, however.

The subsequent litigation over the invention of the integrated circuit led to one of the most significant patent lawsuits in history, creating a career for Roger Borovoy. Jack insisted in his testimony that the words “layed down” applied to deposited metal electrical connections. But TI hired a prestigious law firm that thought the patent suit would be a slam dunk for TI. It wasn’t. Roger, who was corporate counsel for Fairchild, prevailed with the view that the planar process was distinctly different from Jack’s approach to connecting the elements of the integrated circuit. Roger moved on to Intel and became well recognized as a corporate attorney. Jack had to accept the incompetence of the TI-chosen law firm and share the recognition with Robert Noyce (although Noyce’s premature death made Kilby the sole recipient of the Nobel Prize for the integrated circuit). Sharing the recognition was not a totally negative outcome. While Jack’s words “layed down” may have included the planar process in his view, the compromise to recognize both men settled the West Coast/Dallas dispute and brought us all together, a result that Jack, as a gentle, non-argumentative person, would have applauded.

I had the good fortune to meet regularly with Jack. He was a truly wonderful person, very modest and quiet. We both joined the advisory board of Formfactor at Bill Davidow’s request, and I had many delightful discussions with Jack, in addition to those I had early in my TI career.

The 20 Questions with Wally Rhines Series


Liberate Trio Embraces ML and Cloud

Liberate Trio Embraces ML and Cloud
by Alex Tan on 07-05-2018 at 12:00 pm

A chain is as strong as its weakest link. This phrase resonates well in the Static Timing Analysis (STA) domain, though here it is about accuracy rather than durability. Since the timing signoff step provides the final performance readings of a design, an STA outcome is only as good as its underlying components. Aside from parasitic extraction accuracy and a delay equation that should be consistent with the upstream place-and-route tool, the accuracy of a design’s cell timing models derived from characterization runs is critical – and could be the weakest link if not handled properly.
Continue reading “Liberate Trio Embraces ML and Cloud”


CEO Interview: Cristian Amitroaie of AMIQ EDA

CEO Interview: Cristian Amitroaie of AMIQ EDA
by Bernard Murphy on 07-05-2018 at 7:00 am

AMIQ EDA has caught my attention over the last few months. My first impression was that this was just another small IDE company trying to compete with established and bundled IDEs from the big 3, a seemingly insurmountable barrier. This view was challenged by an impressive list of testimonials, not just from the little guys but also from designers in many of the bigger companies. In fact, hats off to them also for getting endorsements through this method, bypassing impossible-to-extract corporate endorsements. So it seemed worth learning more about this company. A discussion with Cristian, as the CEO of AMIQ EDA, was a good place to start.

Who is AMIQ EDA?
We’re an EDA company. We provide software tools to help design and verification engineers improve the speed and quality of new code development, simplify legacy code maintenance, accelerate language and methodology learning, and improve source code reliability. Our goal is bolstering the productivity of writing and debugging code while also increasing the chances that the code does what you intend. To use a phrase that I’ve seen before in EDA, we improve engineering efficiency and efficacy. That’s a fancy way of saying that we help design and verification engineers do their jobs.

How did the company start?
AMIQ EDA began in 2008, but we are actually a spinoff from our partner company AMIQ Consulting. They have been providing services and training in functional verification, verification planning and management, verification IP development, and related fields since 2003. In the course of their work they found they lacked certain tools that could make them more effective, so they began developing their own solutions in-house. We formed AMIQ EDA to bring these products to the general market and to develop new tools around similar objectives. This has worked out very well as judged by our customers’ response.

What keeps your customers up at night?
There are many design and verification languages and formats in use today, and it takes time to learn them and to become proficient. There are also numerous libraries and methodologies built on top of these languages, so there is a lot to absorb. We make it easier to learn all this technology, but we don’t stop there. Chip size and complexity continue to grow, and there’s no way that even expert users can manage their design and verification projects with simple text editors and Unix shell utilities. They find especially that it is challenging to keep many different side files (assertions, power intent, testbenches and others) aligned against signal and module names with the RTL as the design evolves.

How does an IDE help them?
For years, software engineers have relied on IDEs to help them write, format, and debug their code. We started development of an IDE for hardware design and verification engineers in 2005, so now we have a mature and full-featured solution: the Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE). We support Verilog, SystemVerilog, Verilog AMS, VHDL, VMM, OVM, UVM, UPF, CPF, e, SDL, SLN, and more. We’ve moved into new domains such as power intent verification and portable stimulus specification while continuing to support existing languages and methodologies. DVT Eclipse IDE compiles your code and signals errors as you type, speeds up code writing using auto-complete and quick fix proposals, and finds anything you are looking for instantly. Our customers are very enthusiastic about how much time they save in aligning design and side files through these edit aids, checks and side-by-side views, so much so that they’re already suggesting more ways we can help.

Do you have other products?
Yes, we do. DVT Debugger allows you to perform debugging from the same place where you develop your code. You don’t need to keep switching between your editor and the simulator. DVT Debugger is integrated with all major simulators to bring run-time information into DVT Eclipse IDE. You might wonder why this is needed given the prominence of some commercial debug platforms but in fact a lot of users today work with mixed verification platforms leveraging popular waveform viewers from outside the big 3. DVT Debugger unifies debug in these environments.

If you’re doing verification with SystemVerilog, our Verissimo SystemVerilog Testbench Linter will enforce your specific group or corporate coding guidelines. This ensures consistency and best practices in code development. We build in deep knowledge of the Universal Verification Methodology (UVM) library, but we also support custom SystemVerilog environments. With hundreds of rules inspired by real-life projects, in the past three years Verissimo has been widely adopted as “the testbench linting standard.” Our customers are using Verissimo as part of their everyday code qualification process and they are constantly monitoring code quality using fully automated dashboards.

Finally, Specador is a tool that automatically generates accurate HTML documentation from your source code, even when the code is poorly documented. Most tools that generate documentation just extract and combine comments from your source code. All our tools, including Specador, compile and analyze source code to understand the organization and links within your environment. For example, Specador outputs cross-linked class inheritance trees, design hierarchies, and diagrams. Because the tool is language-aware, the resulting documentation is organized by language specific concepts, including both relationship and structural information.

How big is AMIQ EDA?
We accomplish a lot with a small team of about 15 R&D engineers. We are based in Bucharest, Romania and have sales and support channels around the world. Many people are unfamiliar with the high-tech environment in Romania, where tens of thousands of engineers work for thousands of companies. We have access to a great pool of talent!

Can you identify some of your customers?
Our solutions have been adopted worldwide by more than 100 companies in more than 30 countries. We provide many testimonials directly from our users on our Web site at https://www.dvteclipse.com/testimonials. You’ll see not only that they like our products but also they like our responsiveness. Their needs drive improvements in our products which, as a small company, we can turn around quickly. Proving that you don’t have to be a big company to have popular products!

Where can readers learn more?
Check out our website: http://www.amiq.com/

Also Read:

CEO Interview: Jason Oberg of Tortuga Logic

CEO Interview: YJ Su of Anaglobe

CEO Interview: Ramy Iskander of Intento Design


Morris Chang and Me

Morris Chang and Me
by Sunit Rikhi on 07-04-2018 at 12:00 pm

Legend has it that in 1984, Morris Chang was approached by a friend who was looking for money to buy equipment for manufacturing his electronic chip designs. Morris told him to do more homework. When his friend did not return, Morris reached out to him. His friend, it turned out, did not need the money after all. He had found another manufacturer willing to “rent” him equipment capacity at a fraction of the cost.

Morris was intrigued. Moore’s Law was two decades strong, delivering faster, cheaper, and cooler transistors every couple of years. More and more chip designers were designing innovative products with these transistors, pushing up the demand for manufacturing. Like his friend, not all chip designers could afford their own manufacturing capacity. Was the world ready for semiconductor manufacturing services?

It took Morris 3 years to answer that question. By 1987, he had launched Taiwan Semiconductor Manufacturing Company (TSMC) – a company that manufactured chips for others, as a service. The company is known as a foundry because of the similarity of its business model to that of the metal casting foundries in operation since the early 19th century.

I never met Morris Chang. But, for the three decades that followed, I was his fellow traveler, a keen observer, a student, and a competitor.

Back in 1984, I was a 27 year old electronics engineer starting my career with Intel. Intel was (and still is) an Integrated Device Manufacturer (IDM). The IDM business model is the opposite of the foundry business model. An IDM develops manufacturing capability for its own products designed by its own IC designers. A foundry on the other hand, develops manufacturing capability for its customers’ products designed by its customers’ IC designers. A foundry helps its customers compete with IDMs.

Intel and TSMC grew up as leaders in the semiconductor industry they helped shape. Both drove exponentials: one in the electronics capability world-wide, and another in the reduction of cost for that capability. It resulted in fundamental changes in the way we live.

For most of this period, the industry generally accepted Intel as the leader at the edge of Moore’s Law, by at least a generation. I was one of the Intel voices shining light on Intel’s lead and explaining how that lead gives a competitive edge to Intel’s chips. Publicly, Morris did not indulge much in the technology leadership question, choosing instead to emphasize TSMC’s brand promise of customer service, trustworthiness and breadth of offerings.

In a 1998 interview, Morris said “The main thing that we’ve learned is that foundry is a service-oriented business, so we are molding ourselves into a service company”. These words were not from a business school slide. They came from the deep and powerful insights of a master businessman. They captured the pith of TSMC’s winning strategy. An important aspect of the service strategy was the harvesting of immense knowledge from the intimate teamwork between TSMC technologists and its customers’ IC designers. The willingness to learn from his customers was crucial in targeting and tuning his offerings to match his customers’ needs.

His emphasis on customer service did not mean TSMC was not focused on advancing with Moore’s Law. It was. I once described this pursuit as a group led by Intel, running towards an invisible wall, on an increasingly difficult terrain, and in a fog that was getting denser by the year. During this journey, Intel could hear the sounds of rival footsteps behind it, with many growing fainter over time. But not TSMC’s. It had been consistent, even getting louder, as it pulled up to Intel and started running shoulder to shoulder. Morris was clear about the importance of technology in making his customers competitive. In one interview he said “TSMC will stand behind our customers and cooperate with them. The battlefield between our customers and Intel is where we compete against Intel”.

The dawn of this century saw a change in the client computing landscape. By then, the computer had spread from the desktop to the lap, but the move to the pocket was just starting. Intel assumed that Intel Architecture would sail into the pocket as easily as it did into the laptop. History, however, proved that assumption wrong. The late Paul Otellini, Intel’s CEO at the time, considered that one of his most significant failures. It was, in fact, Intel’s failure, not just his own. We at Intel felt entitled to success in markets where we were not incumbents. Our actions and inactions were rooted in that. But this was one of TSMC’s most spectacular successes, a result of years of customer-driven learning and delivering to commitments.

By 2008, Intel had launched a foundry division called Intel Custom Foundry (ICF), aiming to manufacture custom products for strategic customers. Intel was not the first to think of creating a foundry within an IDM. IBM offered foundry services long before that, and so did Samsung. However, due to Intel’s reputation as the leader in pursuit of Moore’s Law, even the most skeptical potential customers were intrigued, despite their concerns about incompatibility between the foundry and the IDM models. With ICF, Intel competed directly with TSMC. I led the formation and build up of ICF.

Soon, I came face to face with TSMC on the battlefield. In 2013, Altera Corp decided to switch from TSMC to Intel for their leading edge chips. Although Altera was not one of the highest revenue customers of TSMC, it was a strategic customer because it drove the leading edge of Moore’s Law. At the Q1 2013 TSMC earnings call, Morris was asked questions about the design loss of Altera. He said that he hates to lose even a part of an old customer. He said he regretted the loss and because of this, TSMC had investigated and thoroughly critiqued itself. He continued “…and there were, in fact, many reasons why it happened and we have taken them to heart. It’s a lesson to us and at least, we’ll try our very best not to let similar things happen again”. He clearly held himself accountable for the loss and resolved to do something about it. His humility was admirable and disarming. It kept me from gloating over my win.

Morris Chang was 55 when he started TSMC, and he walked away earlier this month, ending his glorious innings at age 86. This transformational giant of the semiconductor industry taught us through his goal clarity, personal humility, and tenacious stamina, that inspiration can hit at any age, and spectacular climbs to unimagined peaks can be undertaken anytime. Thank you, Morris.