ClioSoft’s designHUB Debut Well Received
by Mitch Heins on 08-03-2017 at 12:00 pm

It was only back in May of this year that ClioSoft first introduced designHUB, a revolutionary new product meant to enable better use of intellectual property (IP) within a company. I wrote a SemiWiki article at the time of the announcement and mentioned it again in a lead-up article to the 54th Design Automation Conference (DAC) held in June. The product made its debut at DAC and it appears to have been well received.

I traversed the DAC floor each day and made a point of walking by the ClioSoft booth to see how they were doing, and I must say it was busier than most. ClioSoft is an interesting company when it comes to the DAC floor, as they work with almost every other company that has electronic design automation (EDA) product offerings, making them largely tool-agnostic. The big-3 EDA players all have their followers and you will routinely see large numbers of people milling around their booths. For the smaller EDA players it's not so easy; their booth traffic is often slow, sometimes with only their salespeople standing around talking to each other. That didn't seem to be the case for ClioSoft this year, and one of the reasons is the new designHUB product.

ClioSoft gave regular presentations on designHUB throughout the show. They also presented the new product at the Cadence and Chip Estimate booths, each of which drew a good-sized crowd of onlookers. So, what were they presenting?

The whole idea of designHUB is to enable companies to access and leverage what ClioSoft calls the ‘Untapped potential of the enterprise’.

Every design company has many IPs, documents, flows, scripts, and ideas that get used once and then put on the shelf, some never to be used again. This IP represents millions of dollars of corporate investment that could and should be leveraged for the benefit of the company, but isn't, because companies lack an easy way to mine and manage the information. The biggest potential lies in the minds of the companies' employees: the ideas and experience they have accumulated over the years. The process for capturing this IP is ad hoc at best and for the most part non-existent. When an employee leaves, that experience and knowledge usually leaves with them.

ClioSoft attributes the industry's poor reuse record to several factors. These include companies' inability to share IPs across artificial silos such as different organizational structures and business units, as well as design teams dispersed across geographies and time zones. Reuse of external IP is also encumbered by the lack of easy ways to manage third-party licensing information and track third-party IP usage within a company.

A more complex problem is that much of the value of design IP comes in the form of meta-data. Meta-data in this case is information about the design IP that helps put things into context, such as the assumptions about how the IP is to be properly used. Many times, this data is trapped in emails, meeting minutes, data management systems and issue-tracking systems or, as mentioned earlier, in the minds of the engineers who worked on the IP.

ClioSoft's designHUB addresses these problems by providing an IP reuse ecosystem that enables designers to create, share and reuse IP. The system acts as a company-wide repository that can be accessed both for design IP data and for the all-important meta-data that puts the IP into context. This includes where the IP originated, its original purpose and function, assumptions about its proper use, known problems and their resolutions, and so on; basically, any information that was captured as the IP was being created and used.
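
To make this concrete, here is a minimal sketch of what such an IP catalog record might look like. The field names and structure are hypothetical illustrations, not ClioSoft's actual schema or API:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IPRecord:
        """Hypothetical designHUB-style entry: the design data reference plus
        the meta-data (context) needed to reuse the IP safely."""
        name: str
        origin: str                                            # team/project the IP came from
        purpose: str                                           # original function and intent
        assumptions: List[str] = field(default_factory=list)   # conditions for proper use
        known_issues: List[str] = field(default_factory=list)  # problems and resolutions
        used_in: List[str] = field(default_factory=list)       # projects that have reused it

    adc = IPRecord(
        name="sar_adc_12b",
        origin="analog group / project Falcon",
        purpose="12-bit SAR ADC for a sensor front end",
        assumptions=["1.8V supply", "clock <= 20MHz"],
        known_issues=["INL degrades above 85C (resolved in rev B)"],
    )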

However, designHUB doesn't stop there. It is not simply a data repository; designHUB merges traditional design with social media. Its web-based platform provides a unified dashboard within the company that enables designers to communicate project status, query the company for IPs (both internal and external to the company), get technical details about an IP, ask questions about it, and query others within the company about its use.

Additionally, designers can use designHUB’s crowdsourcing capabilities to actively interact with other designers, share ideas and insights, discuss interests, do research, and ask questions and get solutions to problems. Once an IP is added to the system, the company can also use designHUB to track the IP’s usage, issues, defects and resolutions and provide feedback that could be useful for future users of the IP.

As mentioned, the beauty of designHUB is that it is design-tool and design-methodology agnostic. So, whether you are an avid user of only one of the big-3 EDA players' tools or your methodology mixes and matches tools from several EDA vendors, designHUB can help. With increasing design complexities and fast-moving market opportunities like those presented by the Internet of Things, greater design efficiency and reuse will be paramount to staying competitive. Good design companies understand this, and perhaps that's why ClioSoft's booth at the 54th DAC was so busy when others were not.

See also:
ClioSoft designHUB product page



Automotive System Reliability – ISO 26262 impacts IP and Tools
by Tom Simon on 08-03-2017 at 7:00 am

If you have been following the topic of ISO 26262, you now realize that IP, or even EDA design tools, developed to the highest quality standards still can't be ISO 26262 certified. Recently I had a conversation about this topic with Kurt Shuler, VP of Marketing at Arteris, who also sits on several ISO 26262 technical committees. He pointed out that there is plenty that IP and tool providers can do to make it easier for the automotive systems developed with their products to achieve ISO 26262 qualification.

Let's dig a little deeper into how this works. Kurt first brought up how the context for IP and tools is important. Before highly programmable devices, it was easier to assess how a component would operate in a larger system. Now IP and tools are, 99.9% of the time, safety elements out of context (SEooC). Because they can be used for just about anything, there is no way to determine how well they will perform without understanding the context in which they will be used.

Even the way that Automotive Safety Integrity Levels (ASILs) are defined drives the concept that it is how things are used that matters most. The controller chip for your seat position has markedly different qualification requirements than the same component used in an autopilot system.

So we know that ASIL levels are set for each identifiable hazardous event in a subsystem. By the way, a handy way to remember the severity of ASIL levels is to think that C and D stand for "causes death." ASIL A, for instance, applies to seat-position system events. The ASIL level is the 'product' of the injury severity of the event, the exposure (or likelihood), and the controllability. Kurt cited the example of a cruise control going offline: in that instance the driver could take over. Even an autopilot failure today can be recovered from by the driver grabbing the wheel. Time frames are important. Each event has a recovery interval – the amount of time available for a driver or the system itself to recover.
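
In ISO 26262-3 this combination is actually a lookup table over severity (S1-S3), exposure (E1-E4) and controllability (C1-C3) classes. As a rough illustration only (a hypothetical helper, not text from the standard or from Arteris), an additive shortcut over the class indices reproduces that table:

    def asil(severity: int, exposure: int, controllability: int) -> str:
        """Map S (1-3), E (1-4) and C (1-3) class indices to QM or ASIL A-D."""
        assert 1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3
        score = severity + exposure + controllability
        return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(score, "QM")

    print(asil(severity=1, exposure=2, controllability=2))  # mild, occasional, controllable -> QM
    print(asil(severity=3, exposure=4, controllability=3))  # life-threatening, frequent, uncontrollable -> ASIL D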

Kurt talked about features that can be added to IP and software to make it easier for a hardware component to achieve a higher qualification level. In Arteris' case, they offer products for on-chip communication. They have built-in checkers that validate the integrity of the data communications. These check the system regularly and periodically, even while it is operating, to make sure there are no faults. Kurt pointed out that the checkers themselves need to be checked. So, of course, there are features that check the checkers by intentionally inserting faults and ensuring they are handled.
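
The pattern is easy to illustrate in software, although this sketch is purely illustrative and is not Arteris RTL or its actual checker logic: a parity check guards the data path, and a periodic self-test deliberately injects a fault to confirm that the checker itself still detects faults.

    def parity(word: int) -> int:
        """Even parity bit over a 32-bit data word."""
        return bin(word & 0xFFFFFFFF).count("1") & 1

    def check(word: int, parity_bit: int) -> bool:
        """True if the received word is consistent with its parity bit."""
        return parity(word) == parity_bit

    def checker_self_test() -> bool:
        """Inject a deliberate single-bit fault and confirm the checker flags it."""
        word = 0xDEADBEEF
        passes_good_data = check(word, parity(word))             # must pass
        catches_bad_data = not check(word ^ 0x1, parity(word))   # must fail on the injected fault
        return passes_good_data and catches_bad_data

    assert checker_self_test()  # run periodically, even while the system is operating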

For IP the crux of the issue is coverage. Kurt points out that IP providers must make it as easy as possible for their customers to assess diagnostic coverage. What is known as Failure Modes, Effects and Diagnostic Analysis (FMEDA) becomes essential. It has to cover a myriad of issues, such as stuck bits, power transients, and almost anything else you can imagine. For each of these the effect of the failure must be determined. It's important that issues occurring at the lower level can be caught and then handled at the system level.

Because ISO 26262 anticipates sustaining engineering and traceability for up to 10 years, there will be many new requirements for design data preparation and preservation. In fact, entire tool chains must be available in order to recreate, analyze or modify designs over this period. ISO 26262 at the IP level is really about documentation, so that the system designers have the information necessary to qualify the system. Kurt mentioned that Arteris is offering a service to hold all configuration files for a customer design in a secure repository for the required 10 year period.

My conversation with Kurt was very informative. One of the benefits of having an IP provider that is so involved in the development of relevant standards is that the IP they develop is much more likely to facilitate implementation of the qualification process. Certainly, it is evident that the requirements of ISO 26262 are making their way through the entire semiconductor supply chain. When cars become fully autonomous, without any driver controls, it will be essential that complying with the reliability standards is considered at every level of system development. For more information about ISO 26262 and how Arteris is engaged in this effort, take a look at the Arteris website.


If you could ‘design’ your own child, would you?
by Vivek Wadhwa on 08-02-2017 at 12:00 pm

Scientists in Portland, Ore., just succeeded in creating the first genetically modified human embryo in the United States, according to Technology Review. A team led by Shoukhrat Mitalipov of Oregon Health & Science University is reported to “have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.”

The U.S. team’s results follow two trials — one last year and one in April — by researchers in China who injected genetically modified cells into cancer patients. The research teams used CRISPR, a new gene-editing system derived from bacteria that enables scientists to edit the DNA of living organisms.

The era of human gene editing has begun…

In the short term, scientists are planning clinical trials to use CRISPR to edit human genes linked to cystic fibrosis and other fatal hereditary conditions. But supporters of synthetic biology talk up huge potential long-term benefits. We could, they claim, potentially edit genes and build new ones to eradicate all hereditary diseases. With genetic alterations, we might be able to withstand anthrax attacks or epidemics of pneumonic plague. We might revive extinct species such as the woolly mammoth. We might design plants that are far more nutritious, hardy and delicious than what we have now.

But developments in gene editing are also highlighting a desperate need for ethical and legal guidelines to regulate in vitro genetic editing — and raising concerns about a future in which the well-off could pay for CRISPR to perfect their offspring. We will soon be faced with very difficult decisions about when and how to use this breakthrough medical technology. For example, if your unborn child were going to have a debilitating disease that you could fix by taking a pill to edit their genome, would you take the pill? How about adding some bonus intelligence? Greater height or strength? Where would you draw the line?

CRISPR’s potential for misuse by changing inherited human traits has prompted some genetic researchers to call for a global moratorium on using the technique to modify human embryos. Such use is a criminal offense in 29 countries, and the United States bans the use of federal funds to modify embryos.

Still, CRISPR’s seductiveness is beginning to overtake the calls for caution.

In February, an advisory body from the National Academy of Sciences announced the academy’s support for using CRISPR to edit the genes of embryos to remove DNA sequences that doctors say cause serious heritable diseases. The recommendation came with significant caveats and suggested limiting the use of CRISPR to specific embryonic problems. That said, the recommendation is clearly an endorsement of CRISPR as a research tool that is likely to become a clinical treatment — a step from which there will be no turning back.

CRISPR’s combination of usability, low cost and power is both tantalizing and frightening, with the potential to someday enable anyone to edit a living creature on the cheap in their basements. So, although scientists might use CRISPR to eradicate malaria by making the mosquitoes that carry it infertile, bioterrorists could use it to create horrific pathogens that could kill tens of millions of people.

With the source code of life now so easy to hack, and biologists and the medical world ready to embrace its possibilities, how do we ensure the responsible use of CRISPR?

There’s a line that “A Prairie Home Companion” host Garrison Keillor uses when describing the fictional town of Lake Wobegon, where “all the children are above average.” Will we enter a time when those who can afford a better genome will live far longer, healthier lives than those who cannot? Should the U.S. government subsidize genetic improvements to ensure a level playing field when the rich have access to the best genetics that money can buy and the rest of society does not? And what if CRISPR introduces traits into the human germ line with unforeseen consequences — perhaps higher rates of cardiac arrest or schizophrenia?

Barriers to mass use of CRISPR are already falling. Dog breeders looking to improve breeds suffering from debilitating maladies are actively pursuing gene hacking. A former NASA fellow in synthetic biology now sells functional bacterial engineering CRISPR kits for $150 from his online store. It’s not hard to imagine a future in which the big drugstore chains carry CRISPR kits for home testing and genetic engineering.

The release of genetically modified organisms into the wild in the past few years has raised considerable ethical and scientific questions. The potential consequences of releasing genetically crippled mosquitoes in the southern United States to reduce transmission of tropical viruses, for instance, drew a firestorm of concern over the effects on humans and the environment.

So, while the prospect of altering the genes of people — modern-day eugenics — has caused a schism in the science community, research with precisely that aim is happening all over the world.

We have arrived at a Rubicon. Humans are on the verge of finally being able to modify their own evolution. The question is whether they can use this newfound superpower in a responsible way that will benefit the planet and its people. And a decision so momentous cannot be left to the doctors, the experts or the bureaucrats.

Failing to figure out how to ensure that everyone will benefit from this breakthrough risks the creation of a genetic underclass who must struggle to compete with the genetically modified offspring of the rich. And failing to monitor and contain how we use it may spell global catastrophe. It’s up to us collectively to get this right.

For more, please read my book, The Driver in the Driverless Car, which this article is based on. I know it will inspire you to learn more about advancing technologies and what they make possible, good and bad.


Cloud-Based Emulation
by Bernard Murphy on 08-02-2017 at 7:00 am

At the risk of attracting contempt from terminology purists, I think most of us would agree that emulation is a great way to prototype a hardware design before you commit to building, especially when you need to test system software together with that prototype. But setting up your own emulation resource isn’t for everyone. The big systems come with eye-watering costs and are primarily targeted to ASIC design. If your objective is FPGA-based design or small ASIC designs, emulation solutions can be less expensive but if you have a limited budget or if you need flexible access to handle peak verification loads, capital and maintenance costs can still be a significant concern.

That's where cloud-based emulation becomes an interesting alternative, in this case with Aldec. You don't have to worry about the hardware, setup or maintenance because you can now access their hardware emulation system (HES) through Amazon's rather well-known cloud services (AWS). Once you have your access credentials, you transfer design files to Amazon's Simple Storage Service (S3); then in Amazon's Elastic Compute Cloud (EC2) you use an instance of an Amazon Machine Image (AMI), configured by Aldec, to do design setup and synchronize files to the HES server. Then you start testbench simulation running on the AMI instance, synced with emulation running on the HES server.
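
The first step is just a file transfer into S3. A rough sketch in Python with boto3 is shown below; the bucket name, file list and key prefix are invented for illustration, and the Aldec-configured AMI is assumed to pull the design from the same location:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-hes-project-bucket"  # hypothetical bucket shared with the Aldec AMI

    for design_file in ["top.vhd", "tb_top.sv", "hes_project.cfg"]:
        # Stage local design sources in S3 so the EC2/AMI instance can fetch them
        s3.upload_file(design_file, bucket, f"designs/{design_file}")

    print("Design files staged; continue setup on the AMI instance over SSH.")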

I wasn’t quite sure how exactly this works until I checked out Krzysztof Szczur’s blog (he’s the verification products manager at Aldec). The HES server in this cloud configuration sits at Aldec’s facility and communicates with the cloud-based AMI instances through a secure VPN. You as a designer communicate with the AMI through secure SSH. You might wonder if security in this configuration is enough to protect your design crown jewels. Take heart – if some of big EDA’s big customers can feel comfortable working in the cloud (sorry, I don’t have names), you probably can too.

And I'm guessing other cloud-based emulation offerings work the same way. After all, neither Amazon nor the emulation vendors are likely to want to deal with the cost and maintenance hassle of emulation boxes sitting in an Amazon datacenter (and come to that – which datacenter?). The simplest way to manage this is a secure VPN back to the emulation supplier, who can more effectively manage and maintain that emulation hub.


Aldec's platform in this case is built on a CentOS 7 Linux-based AMI, preconfigured for HES, supporting S3 storage and running in an EC2 instance with 4 CPUs and 16GB of RAM. In the AMI instance they provide their HES-DVM software for automated synthesis, partitioning and mapping onto the target FPGAs on the HES board. Also included are Riviera-PRO (their HDL simulator), co-emulation and simulation-acceleration support, SystemC and TLM libraries, UVM libraries, and AXI transactors and monitors. I'm not clear whether they also include QEMU support for virtual prototyping, but I see no reason why this shouldn't be possible.

On the emulation end of the solution they provide access to their HES7XV1380BP board (supporting up to 8M ASIC Gates). Since this is in their facility, again I see no reason why they wouldn’t over time expand support to more of their board options.

This concept of cloud-based access to EDA applications is taking off. Cadence recently touted access to implementation solutions in the cloud, as well as access to emulation in the cloud. It's been a long time coming, gated at least as much by design-house concerns over security as by finding a workable business model that meets both EDA vendor and customer needs. Especially in cases like these, where specialized hardware must be part of the solution, elastic compute is needed to handle peak demand, and affordability is essential to enable the current explosion of systems innovation, solutions of this type can only become more popular. You can learn more about Aldec's HES solution HERE.


AI ASICs Exposed!
by Daniel Nenni on 08-01-2017 at 12:00 pm

Artificial intelligence, or AI, is really heating up these days. The technology has been around for decades, but of late it has become quite a focus for applications such as data center analytics, autonomous vehicles and augmented reality. Why the rebirth? The trend appears to be driven by two forces – availability of data to train these systems and new technology that dramatically speeds up the training process. Let's take a look at both of these trends.

Regarding the data, this is really the currency of AI. Without massive amounts of known results, inference and machine learning aren't possible. Thanks to the huge, global footprint and ubiquitous nature of a few key players, data stores are being built every day. Google has amassed a huge amount of empirical data associated with autonomous vehicle behavior. So have Tesla, Detroit and every other car manufacturer for that matter. Audi appears to be pulling ahead with the planned introduction of Level 3 capability ("no feet, hands or eyes") for their flagship A8. Look here to learn more about this development.

Natural language processing is another frontier. Think about all the gadgets in your house that are listening to you and ready to interact (e.g., Amazon Alexa, Samsung TVs and the like). This is not really a plot to eavesdrop on you. It does, however, look like a carefully engineered program to learn how to interpret human speech. There are many more examples of massive data collection from the likes of Google, Facebook, Amazon and Microsoft.

Looking at AI from the technology perspective, Nvidia’s GPU technology took an early lead as an architecture that could be adapted from graphics acceleration to AI training. The field is now expanding from this training phase and new architectures that can execute AI systems at much faster speeds are being developed. Companies like Nvidia, Qualcomm, Intel, IBM, Google and Facebook as well as others are jumping in.

These devices aren’t really chips, but rather systems in a package. They typically contain a massive processing ASIC (or two) built in the latest semiconductor technology (think 16nm and below) along with massive amounts of ultra-high bandwidth memory (think HBM2 stacks) all integrated on some kind of interposer (think silicon). We know who needs these chips, but who is designing and building them?

From a foundry perspective, this is the big leagues. TSMC, Samsung and GLOBALFOUNDRIES are players. Not a long list – this is hard stuff. These are ASICs, so who is sourcing the design? You need to look at who is really good at 2.5D integration and who owns the critical enabling IP for these designs (think the HBM2 physical interface and high-speed SerDes). The HBM2 PHY and high-speed SerDes blocks implement the mission critical communications between the various parts of these systems. They both represent very demanding analog-ish design challenges and sourcing them from the ASIC vendor is a very good idea to keep risks to a minimum.

The list of ASIC suppliers that possess all these pieces isn't very long. Since this market will likely see explosive growth, this is a good list to be on. There's one ASIC vendor in particular that will be interesting to watch – eSilicon. Regarding the required technology trifecta, they've been doing 2.5D integration since 2011 and are regarded as a leader in this space (check). They've also introduced a silicon-proven HBM2 PHY, and SemiWiki covered this in a recent post (check). But what about the SerDes? Up to now, eSilicon has been integrating third-party SerDes blocks. If you look closely, this may be changing, however. The company has made no formal announcements about ownership of SerDes technology, but you can find mention of a High Performance SerDes Development Center on their website. And they're hiring layout engineers as well, which is a big tell.

Bottom line: I’d keep an eye on eSilicon – the short list of ASIC players for the AI market is about to get a little longer, absolutely.


ARM and Cadence IP Simplify IoT System Design and Verification
by Mitch Heins on 08-01-2017 at 7:00 am

As the Internet-of-Things (IoT) markets mature, we are seeing the complexity of IoT systems evolve from simple routing functions that connect IoT edge devices to the cloud into more complex systems of systems that manage the interaction between multiple sensor-hubs. IoT sensor-hubs and gateways not only take care of the basic care and feeding of sensors, they are also evolving to do more data fusion, analysis and analytics local to the edge devices. Doing this can decrease the response time of the overall system while also reducing the amount of raw data that must be sent into the cloud.

To do this type of work, IoT SoCs are becoming quite complex and include heterogeneous architectures containing multiple cores and GPUs, each of which has specialized capabilities to turn sensor data into information that can be used by cloud-based applications. These systems must manage shared local memory between processors, and they must deal with multiple communications standards and protocols to talk to both the sensors and the cloud servers.

Designing and verifying these types of SoCs from scratch could be a daunting task but I was pleasantly surprised by a demonstration at this year’s Design Automation Conference (DAC) that showed how far the industry has come to help designers tackle the problem. The demonstration was given by ARM and Cadence Design Systems, two companies who have partnered for several years to make SoC design easier. In fact, the Cadence-Heart-ARM graphic came from a SemiWiki article written by Paul McClellan in 2014 where Cadence and ARM were already working together on this topic.

The point of the DAC2017 demonstration was to show designers how quickly and easily they could customize and verify a new IoT design using predefined platform IP from ARM and Cadence. Designers were presented with ARM's CoreLink SDK-200 System Design Kit, a configurable platform comprising one or more ARM Cortex-M33 cores, system memory, Cordio Radio IP, ARM TrustZone CryptoCell IP, AHB5 interconnect logic and any number of standard communications protocols and interfaces to communicate with different sensors and actuators.

The demonstration started with several peripherals already pre-assembled into a system (five from Cadence IP and one from ARM's IP catalogue). Designers attending the demonstration were tasked with adding a GPIO interface to the system and were timed to see how long the task would take them.

The idea was to get people to try the flow and see how long it would take them to add and configure a GPIO interface into the system and then verify they did it correctly using the Cadence Verification Workbench tool suite. ARM and Cadence walked me through this, and I was amazed to see that with ARM's Socrates development environment, a standard GPIO interface could be added to the IoT platform with literally only a couple of minutes of configuration work by the designer. Socrates takes care of the rest by automatically generating all the necessary RTL code to make the connections to the ARM AHB5 interconnect fabric.

The Cadence guys then jumped in and showed me how they can take the output of ARM's Socrates software and feed it into their Interconnect Workbench to automatically generate a UVM test bench for the entire system, complete with Verification IP (VIP) not only for the ARM IoT platform (and all its constituent pieces) but also for the GPIO block that had just been added. The generated test bench included the test features needed to ensure the system was correctly connected. Within five minutes the full test bench was up and running, enabling the designer to start running both detailed functional tests and a suite of regression tests that could be used anytime new changes were added to the system.

From scratch, it would have taken a large team of people multiple months to pull together the system and all the test benches required to test it. While I didn't run the actual tools, several people at DAC did go through the full exercise at the ARM and Cadence booths, and from what I was told some of the best times to complete the exercise were in the sub-5-minute range. I've had days when I can't even log into my system in that amount of time. Obviously this was a demo, yet it spoke volumes about the power of having well-architected, tested, predefined platforms from which to start your design customization. I was certainly impressed.

ARM and Cadence were quick to point out that the IoT design kit is only one of many different end-application platforms on which they have jointly worked, and that the platform-based methodology they are espousing is generally applicable across many different types of SoC designs. As noted earlier, this solution didn't just pop out of thin air. It is the fruit of many years' labor by ARM and Cadence working collaboratively to create the software infrastructure that allows designers to functionally combine IP blocks at a higher level of abstraction and then automatically configure the logic and test benches to ensure a correct-by-construction implementation. This is a big leap forward for designer productivity, as it allows designers to focus on customizing and verifying functionality that meets their end-application needs as opposed to spending weeks or months manually making interface connections and writing test benches that may or may not verify their work.

With the advent of the IoT market and the evolving nature of the SoCs needed to implement IoT systems, it looks like ARM and Cadence couldn’t have fielded their joint solution at a better time. This kind of innovative cooperation is what separates true solutions providers from simple IP and tool providers.

See also:
ARM IoT Solutions
Cadence SoC Interconnect Verification Solution


Samsung Sloppy Sailor Spending Spree!
by Robert Maire on 07-31-2017 at 12:00 pm

Last week, TEL (the Japanese equivalent of AMAT and LRCX) reported a June quarter in which revenues dropped to 236B yen from March's 261B yen and earnings dropped from March's 47B yen to June's 41B yen, a 9.3% decrease in revenue and a 12.8% decrease in earnings, respectively.

We don’t think this is attributable to share loss and actually think there is little to no share loss in the numbers. There may be customer mix issues but the mix reported seems very similar to that just reported by Lam.

Meanwhile, Samsung had fantastic results, with semiconductor sales of 17.6T KRW ($1B ≈ 1.1T KRW), up 15% Q/Q. Operating margin was up to 46% from the previous quarter's 40%. Memory demand remains strong and pricing remains firm, so Samsung is obviously confident enough to plow more money into capex for chip production.

Total capex was 12.7T KRW, of which 7.5T was for semiconductors and 4.5T for displays. By comparison, Q1 was 9.8T, of which 5T was semis and 4.2T displays. This represents a whopping 30% sequential increase in capex Q/Q. It also means that H1 capex was 22.5T KRW versus ALL of 2016 at 25.5T KRW. Obviously 80% of the spend goes to memory, but logic is still getting support and may get more in the future given Samsung's recent comment about wanting to triple its foundry business to compete with TSMC.
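
As a quick back-of-the-envelope check on those figures (all in trillions of KRW, as quoted above):

    q1_capex, q2_capex = 9.8, 12.7   # total capex, T KRW
    h1_2017, full_2016 = 22.5, 25.5  # first-half 2017 vs. all of 2016, T KRW

    print(f"Q/Q capex increase: {(q2_capex / q1_capex - 1) * 100:.0f}%")      # ~30%
    print(f"H1'17 as a share of all 2016: {h1_2017 / full_2016 * 100:.0f}%")  # ~88%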

In effect we have “a tale of two cities”- TEL is already reporting a slowdown in capex spend but Samsung looks like it has no reason to slow down capex spend. We are clearly seeing a slowing of DRAM spend which we think is a good thing given the volatile nature of that segment. Obviously customer mix is going to change a bit so we may see differing effects on different equipment makers depending upon who their biggest customers are.

The slowdown at TEL adds another data point for a slowdown, or at least a near-term peak, in the market.

TEL Top Tick???
The 9% drop in revenues and 13% drop in earnings should not be ignored given TEL’s stature in the business. There is strong correlation to what we heard from Lam last night as TEL saw its NAND business increase from 27% of sales to 40% while DRAM dropped from 22% to 17%. Even more surprising was foundry dropping from 27% of business to 16% while logic (mainly Intel) was flattish at 27%. In summary, DRAM and foundry declined in Taiwan (obviously TSMC) while NAND continued to rocket in Korea (obviously Samsung).

TEL said it will increase share in dep, etch and clean in 2017 (AMAT has made similar claims and Lam has been more silent, so we assume some share loss). TEL is doubling etch tool capacity by 2019. TEL's view is for the NAND market to be up 30% Y/Y, DRAM up 5%-10% and foundry/logic flattish. They feel that 2017 WFE capex will be up 10%, primarily driven by 3D NAND and logic.

Too early to tell – pause, peak or plateau…
It's unclear how long the dip will last, whether it's one quarter as suggested by Lam or potentially longer, as TEL has already reported a drop in its June numbers. TEL's 10% drop seems larger than what Lam is implying for September shipments. Our best guess is that Lam is faring better due to higher exposure to Samsung, whose spending will offset other declines.

Further triangulation from KLAC…

We will get our third data point from KLAC but as before we would err on the side of caution and not get too far ahead in our expectations. While TEL, AMAT and LRCX are in similar boats, KLAC and ASML tend to be earlier in the cycle with longer lead times and thus do not correlate as well as LRCX, AMAT and TEL do. We would suggest that TEL’s results are likely most predictive of AMAT’s performance. Given AMAT’s exposure to TSMC we might expect a similar issue in foundry.

AMAT while well exposed to Samsung is not as highly correlated to Samsung as Lam. We do think there is some share loss in low end etch from Lam to AMAT with TEL’s share being a bit more stable. We don’t see share loss from KLAC to AMAT in a significant way as PDC has been less effective than AMAT’s etch.

Samsung spending like there’s no tomorrow…
(maybe there is no tomorrow if Kim Jong Un launches)

Samsung is on a huge roll. The semiconductor division is going great and memory remains huuuuge (obviously there is a very good derivative call on the shares of Micron here…). We think 3D NAND has a long run, but DRAM may slow, and we may be seeing some potential signs of that already in both Lam and TEL. Samsung also knows where its bread is buttered, as consumer electronics was 18% of sales but only 2% of profits. So it's clear that Samsung will continue to feed the semiconductor business, which is the clear earner. This is obviously not lost on Samsung's long-term plans, as it is clearly envious of TSMC and the great job they have done in foundry. We think it's going to be very, very hard for Samsung to make progress in foundry, but we think they will be very dominant in memory for a long time, as they already are, and may become even more dominant if Toshiba pauses here.

The stocks…
We think we were correct about being light on the shares of LRCX going into their quarterly report. We would take the same approach with KLAC, as we think the stock has had a great run and could be a bit "toppy" with "twitchy" investors.

The TEL news makes us even more nervous and cautious than before. While fundamentals remain solid, there is clearly some near-term uncertainty: as spending shifts between chip makers, it remains hard for equipment companies to make up for slowing spending coming out of TSMC. The best scenario would be for Intel to spend more to make up for TSMC's slowing, but we are not sure that will be the case.


Synopsys Opens up on Emulation
by Bernard Murphy on 07-31-2017 at 7:00 am

Synopsys hosted a lunch panel on Tuesday of DAC this year, in which verification leaders from Intel, Qualcomm, Wave Computing, NXP and AMD talked about how they are using Synopsys verification technologies. Panelists covered multiple domains but the big takeaway for me was their full-throated endorsement of the ZeBu emulation solution. Synopsys historically has been a little shy about sharing their accomplishments in this domain. They weren’t shy in this event.


Michael Sanie, VP of marketing for the Verification Group, gave a quick overview of recent accomplishments in verification, first by noting that Synopsys has the fastest growing emulation business in the industry. Of course it’s easier to grow fastest when you start small, but the growth rate is notable. And for those of us who thought the emulation biz for Synopsys was mostly just Intel, the following panelists made it apparent that its growth is broad as well as deep.

Chris Tice, Synopsys VP for verification continuum solutions (previously VP/GM for hardware verification at Cadence), followed, talking about the petacycles challenge in bridging between growing software cycles and hardware, while also dealing with accelerating expectations in integrity, reliability and safety – and of course power for mobile applications. He stressed that fast emulation will be central to satisfying these needs, and that the fast refresh cycle we're seeing in FPGAs is central to delivering solutions.

In an otherwise emulation-centric discussion, Iredomala Olopade from Intel talked about making static verification more effective for huge designs. He offered what ought to be a widely shared piece of wisdom – to get a truly high signal-to-noise ratio in static analysis you need both a design-centric view and tool expertise. He cited as an example a CDC analysis in which 75k paths were reported, of which only 100 were real (sound familiar?). Applying these principles and working with Synopsys, Intel reduced false violations to only 1% of total violations. Pushing even harder, they got down to even fewer false violations and, interestingly, found 30 new and real violations.

Sanjay Gupta, engineering director at Qualcomm leading the verification team, brought the discussion back to emulation. Qualcomm of course has huge designs and is dominant in mobile, so they must deal with increasing functionality, very complex low-power strategies and cycle times of six months to a year for each SoC. In an earlier design, one missed power error led to a DOA chip, making them paranoid about power verification, to the point that they will now only do power signoff in gate-level simulation (because only then do you model the real PG connectivity). He acknowledged that this is expensive, but working with Synopsys they have been able to deploy an end-to-end solution for power-aware regressions using ZeBu, with improved runtime and performance.

Jon McCallam, global emulation architect lead at NXP, followed with their use of ZeBu in verification for automotive applications. He stressed the importance of having emulation mirror simulation as closely as possible, especially when dealing with limited model accuracy for some IP (analog and other non-synthesizable IP). He also stressed the importance of quick turnaround time for classifying hardware versus software problems and fixing bugs quickly, especially in ECO turns for mask sets. I noticed that NXP depends heavily on the unified flow, from simulation, to ZeBu, to HAPS, to Verdi, for verification and for driver development. In fact, they were able to prove out and develop firmware, drivers, the Linux clock and power architecture and more before first silicon, and bring up Linux within 8 days of receiving first silicon.

Alex Starr, a senior fellow at AMD, echoed this "get your software ready pre-silicon" theme. He showed a progression of slides in which they use their VirtualBox virtual platform to model a complete system, including the SoC and software from the OS up to applications. The SoC is modeled through the SimNow platform, starting with a virtual platform model, which supports getting the main software concepts sorted out. Within SimNow they can then switch the whole design to ZeBu. Or they can switch to modeling just the graphics IP running in ZeBu, with the rest of the SoC running on the virtual platform, communicating with the hardware model through transactors. For really detailed hardware diagnosis, they can drop pieces of the model down to VCS-FGP. And when they get silicon, they plug that in, also under SimNow. Throughout all of this, changes in the SoC model are transparent to software developers and validators, and the software stack remains the same from the SoC point of view. AMD has been developing these platforms for a while, precisely to push this aggressive shift-left (plus the ability to drill down for debug) – to get to the point that software comes up at the same time as hardware, with the two converging when silicon is ready.

For the stats geeks, Alex wrapped up by saying that they can now get to under a one-day compile (on ZeBu) for 1.6B gates, and with tuning they are approaching the lower end of FPGA prototyping performance – an important endorsement validating that the Verification Continuum can span use-needs from simulation accuracy to prototype speed.

We’ve had two majors dominating news in this space for a while. Another viewpoint can only make the debate more interesting 😎. You can watch the panel discussion HERE.


SEMICON West – Advanced Interconnect Challenges
by Scotten Jones on 07-28-2017 at 12:00 pm

At SEMICON West I attended the imec technology forum, where Zsolt Tokei presented "How to Solve the BEOL RC Dilemma", and the SEMICON Economics of Density Scaling session, where Larry Clevenger of IBM presented "Interconnect Scaling Strategies for Advanced Semiconductor Nodes". I also had the opportunity to meet with Tanaka Precious Metals and hear about their new ruthenium precursor material. In this blog I will discuss back end of line (BEOL) scaling challenges and some possible solutions.

Zsolt Tokei – imec

The talk began with a quote from “Recent Advances in System-Level Interconnect Prediction” that “Interconnect are the Limiting Factor for Both Performance and Density”.

As interconnect pitches shrink, both capacitance and resistance increase, driving up the resistance-capacitance (RC) delay. The length of critical metal lines is related to contacted poly pitch (CPP), and CPP scaling has slowed, putting more pressure on minimum metal pitch (MMP) to provide scaling. Slower CPP scaling means the interconnect lines stay relatively long, while more aggressive MMP scaling makes them narrower; both increase resistance.
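
To first order, and considering only the coupling to one neighboring line of length L, width W, thickness H and spacing S (a back-of-the-envelope sketch that ignores fringing fields, multiple neighbors and via resistance), this can be written as

$$R \approx \frac{\rho L}{W H},\qquad C \approx \frac{\varepsilon L H}{S},\qquad RC \approx \frac{\rho\,\varepsilon\,L^{2}}{W\,S}$$

With minimum metal pitch P = W + S and W ≈ S ≈ P/2, the delay grows roughly as L²/P²: slower CPP scaling keeps L from shrinking while tighter MMP shrinks P, so RC rises on both counts.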

Options to address RC include: geometry, the 3rd dimension and materials.

Geometric concepts include design rule optimization and the super via concept where instead of limiting vias to connecting metal level n to layer n+1, a super via connects metal n to metal n+2. For example, a standard via connects metal 1 to metal 2 and a super via connects from metal 1 to metal 3. By providing an additional option super vias can improve area and performance.

3D options include chip stacking with through-silicon vias (TSVs), sequential 3D, where you form multiple layers of devices on one chip, or integrating thin-film transistors (TFTs) into the back end. 3D with TSVs has cost and integration challenges, sequential 3D has integration and density challenges, and TFTs in the BEOL have performance and maturity challenges.

Copper has been the material of choice since the 130nm logic node due to its low resistivity, but as dimensions scale down copper presents multiple challenges. The first is that copper requires barrier and adhesion layers. These are made of relatively high-resistivity materials and the required thicknesses don't scale well, so at very small dimensions the percentage of the interconnect's cross-sectional area that is copper shrinks while the percentage that is barrier/adhesion layer grows, increasing the resistance of the interconnect. The second problem is that at very small dimensions the resistivity of copper itself increases due to scattering.
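
A simple geometric estimate shows why the barrier hurts so much at small dimensions. If a liner of thickness t_b covers the sidewalls and bottom of a trench of width W and height H and carries essentially no current, only the copper core conducts:

$$R \approx \frac{\rho_{\mathrm{Cu}}\, L}{(W - 2t_b)\,(H - t_b)}$$

With a liner that stays near 2nm while W shrinks toward 10nm and below, the copper cross-section falls much faster than the drawn line width, and that is before adding the scattering penalty to the copper resistivity itself. (Again a first-order sketch, ignoring conduction through the barrier and cap layers.)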

There is a lot of work being done on alternative materials that require a thinner barrier/adhesion layer or none at all. Even if the new materials have higher bulk resistivity, eliminating the barrier/adhesion layers can result in lower overall line resistance. At the 10nm/7nm node we expect to begin to see cobalt-filled contacts and local interconnect. Around the 5nm node we may begin to see ruthenium interconnects with no barrier/adhesion layer; despite ruthenium's higher resistivity, it can outperform copper with a 2nm barrier at small dimensions. See figure 1.


Figure 1. Alternative metal examples. (Source: imec)


Larry Clevenger – IBM

Similar to the imec talk, this talk began with a statement of the problem, see figure 2.


Figure 2. RC scaling.

EUV can provide some relief in BEOL scaling. In addition to process advantages such as better resolution, single patterning and fewer masks, EUV can allow fewer ground-rule restrictions, bi-directional versus uni-directional wiring, and flexible wire widths.


Figure 3 illustrates the impact of EUV on design rules.

Figure 3. EUV versus 193i impact on design rules.

Other options for improving BEOL performance include:

  • Getting signals into fat wires quickly, trading off area for performance.
  • Using multiple redundant vias where resistance is important.
  • Reducing liner thickness, with 1 to 2nm needed at 5nm.
  • Using asymmetric line widths (metal line wider than the gap between lines) when resistance is more important than capacitance.
  • Removing the barrier layer at the bottom of vias, which can reduce via resistance by 30%.
  • Adding air gaps, which can reduce capacitance by 20%.

As noted in the imec talk, cobalt and ruthenium are potential replacements for copper at smaller linewidths.

And finally heterogeneous integration with interposers can improve latency, bandwidth, cost and reliability.

Tanaka meeting
I also met with Tanaka Precious Metals at SEMICON. Tanaka is developing a ruthenium precursor for ALD and CVD deposition of ruthenium. As both the imec and IBM presentations discussed, ruthenium is being investigated as a copper replacement around the 5nm node. Ruthenium has a higher bulk resistivity than copper but can be used without a barrier layer and doesn't suffer from increasing resistivity as line widths shrink.

Tanaka Precious Metals is a 131-year-old, $8.8 billion, privately held precious metals company. They are already a leader in precious metal sputter targets and gold bonding wire for the semiconductor industry.

Materials like ruthenium can be deposited by physical vapor deposition (PVD), such as sputtering, or by chemical vapor deposition (CVD) or atomic layer deposition (ALD). Typical step coverage for the three techniques is ALD > CVD > PVD.

The precursor Tanaka has developed is a liquid with a vapor pressure of around 0.2 torr at 100°C, similar to other precursors. Deposition temperatures are around 200°C with a deposition rate of 2nm/min, and Tanaka is doing a lot of work to improve deposition rates. They have shown CVD/ALD processes with 5nm and 6nm barriers. The product is available for evaluation now.

Conclusion

Continued scaling will require careful optimization of the BEOL to minimize the negative impact on performance. New techniques such as super vias, TFTs in the BEOL and EUV, along with new materials such as ruthenium, will all be needed to continue scaling.


Tensilica HiFi 3z DSP Core: Leading Energy Efficiency
by Eric Esteve on 07-28-2017 at 7:00 am

The Tensilica HiFi DSP family, dedicated to voice and audio processing, ships in over 1 billion units worldwide annually, thanks to its 75+ licensees. The new HiFi 3z architecture offers more than 1.3X better voice and audio processing performance than its predecessor, the HiFi 3 DSP, which leads the industry in the number of audio DSP cores shipped. Better voice quality requires higher voice sample rates, translating into more complex voice pre-processing. For example, the latest mobile voice codec supporting voice over LTE (VoLTE), Enhanced Voice Services (EVS), supports up to a 48kHz sample rate, compared to 16kHz for the previous AMR-WB codec. The increase in DSP workload is similar for home entertainment applications, as audio codecs like Dolby AC-4 and MPEG-H transition from channel-based to object-based audio.

What is true for pre-processing also applies to audio post-processing functions. For example, supporting Waves Nx 3D/AR audio and the immersive audio of Dolby Atmos-enabled TVs, the HiFi 3z DSP provides 1.4X better performance than the HiFi 3 DSP.

The HiFi 3z architecture improves on the HiFi 3 in several important ways. The 16×16 MAC count has been doubled, to an octal MAC. The HiFi 3 DSP has load/store support in only one slot, while the HiFi 3z provides load/store support in two slots. The HiFi 3z also offers a number of instruction set architecture (ISA) improvements to efficiently support the latest audio and voice compute requirements by accelerating FFTs, FIRs and IIRs. To improve voice-trigger performance, the HiFi 3z provides 4-way 8-bit loads, and 8-way 8-bit loads for reduced neural-network memory usage. Support for multiple instruction-length encodings allows code-size reduction.

The availability of a floating-point unit (FPU) in the DSP dramatically reduces the time from algorithm development to DSP implementation, and the HiFi 3z offers an FPU as a configurable option. Benefiting from this FPU can boost time-to-market, crucial for applications targeting consumer-oriented systems. The FPU can execute up to two floating-point MACs per cycle, and IEEE 754 floating-point operations are available. Because the FPU operates on the vector register file, it results in reduced area and allows seamless conversion between floating point and fixed point. The NatureDSP FPU library is available and supported on the HiFi 3z.

The number of applications that need a powerful but energy-efficient, low-footprint DSP IP to support audio and voice is constantly growing. In the mobile segment, you need audio and voice processing in smartphones, tablets and laptops. In the consumer segment, home entertainment includes digital TVs, set-top boxes, sound bars and gaming, requiring functions like:

  • Audio codecs such as Dolby, DTS, MPEG-H
  • Audio post-processing
  • Immersive audio
  • Interactive audio and voice codecs for real-time gaming

But you also need an audio DSP in the automotive segment to support digital radio and head unit infotainment, relying on high performance:

  • Audio codecs such as Dolby or DTS
  • Audio post-processing, active noise control or in-cabin communications

We also see the emergence of smart speakers, using front-end processing, voice triggering and neural-network processing to support voice recognition. Who would have guessed, a while ago, that audio DSPs would have to support AI functionality? That's why the Tensilica HiFi 3z DSP's 16×16 MAC engines provide 4.8 GMAC/sec of capability (at a 600 MHz clock rate) to DNN-based automatic speech recognition (ASR) systems.
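
That throughput figure follows directly from the octal MAC path, assuming one issue per cycle:

$$8\ \text{MACs/cycle} \times 600\ \text{MHz} = 4.8\ \text{GMAC/s}$$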

How does the HiFi 3z DSP IP core position across Tensilica's various families? It is simply the most energy-efficient device for complex audio/voice processing, offering higher performance than the HiFi 3 (as well as the ultra-low-energy Fusion F1, the mainstream HiFi 2 and the ultra-low-power HiFi Mini). Only the HiFi 4, the best 32-bit fixed- and floating-point device of the HiFi family, provides higher performance.

Follow the link below for more information about the HiFi 3z DSP:

https://ip.cadence.com/ipportfolio/tensilica-ip/audio&CMP=TIP_BLG_SW_HiFi3z_0717_3z_PP

By Eric Esteve from IPnest