
Tensilica HiFi DSPs for What I Want to Hear, and What I Don’t Want to Hear

by Randy Smith on 08-16-2019 at 10:00 am

It seems every day we see a new article (or ten) on autonomous driving. It is an especially hot topic, and it will happen someday. For now, we can dream about it, and many people are working on it. But for the present, the technology in a car that commands my attention is audio. I’ve been a musician since 4th grade. I still perform occasionally today. I love all types of music. And the one time I can listen to whatever I want is when I am alone in the car. So, when I attended Cadence’s Automotive Design Summit at the end of July, the session titled “HiFi DSPs for Automotive Infotainment” had my full attention. Larry Przywara, Cadence’s Product Line Group Director for Audio/Voice IP, Tensilica Products, in the Cadence IP Group, gave the presentation. So, thank you, Larry, you made my day!

If you live in Silicon Valley, you know we are in our cars a lot. I try to stay off the cell phone, but I need to stay connected. I use CarPlay for that, a feature we didn’t even dream about ten years ago. While I was happy when higher-fidelity sound first started appearing in cars (e.g., DTS, DVD-A, etc.), many cars no longer support physical media at all. My phone has over 16GB of music on it; why should I need to fumble with a disc? So, today’s infotainment system needs to support many audio sources including Bluetooth, AM, FM, Digital Radio (HD Radio, DAB, DAB+, DRM…), CarPlay, Android Auto, Sirius XM, MP3 from hard disk, and the list keeps growing. Most importantly, Cadence HiFi licensees, of which there are well over one hundred, can get support for any of the digital terrestrial and satellite radio audio standards because they are all supported on the HiFi Audio DSP.

The presentation also pointed out that the new advanced audio/voice features coming to our cars very soon can all be supported on Tensilica HiFi DSPs. These new features include advanced noise cancellation to eliminate road and engine noise, improved speech recognition, audible directional warnings to let us hear the direction we need to be concerned about, and even improved in-cabin communications. In-cabin communications? What’s this? Well, today’s SUV drivers need to shout to be heard in the back row – but what if the car could pipe that voice to the back row for you instead, no shouting needed? Or, how do you like this feature, “sound bubbles”? The adult driver relaxes to soft jazz, while the kids are watching a movie in the back, and the front-row passenger is listening to their favorite podcast – and none of them hears the others’ content. Wow, that sounds nice! The power of Tensilica’s DSP technology will be doing all of it.

If you want a real-life example from the over 100 HiFi licensees, look no further than the Samsung Exynos Auto V9 automotive processor. While the chip does employ Arm processors for some of the infotainment features, the audio portion consists of four Tensilica HiFi 4 DSPs, as seen here. DSP architectures are better suited for rendering audio and for speech recognition; the advantage comes in part from the low-latency characteristics of DSPs in general, and from the high performance of Tensilica DSPs in particular.

Finally, as with any DSP core, you need the proper software available as well. Indeed, Cadence’s list of 3rd-party Automotive Partners is impressive and complete. No matter which audio features the car manufacturer or infotainment OEM wants to provide, a complete HiFi solution will be available.

“Sorry Honey, I guess I will be getting a new car again soon…”


Chapter 6 – Specialization in the Semiconductor Industry

by Wally Rhines on 08-16-2019 at 6:00 am

Recently, the combined market share of the top ten and top twenty semiconductor companies has been increasing, contrary to the trend of the last fifty years. Given the acceleration in mergers and acquisitions that began in 2015, one might assume that, as the semiconductor industry approaches maturity, companies are consolidating to increase their competitive advantage through economies of scale. After all, that’s what many industries, including disk drives and DRAMs, have done in the past. Closer examination of this trend, however, indicates that semiconductor companies are moving toward specialization rather than just bulking up to increase their revenue. Let’s look at the five largest semiconductor companies, where the consolidation is most evident. The combined market share of these companies has been increasing in recent years as they grow at a 9% compound average growth rate (CAGR) versus a market that grew at 2% CAGR through 2017 (Figure 1). Did they grow by acquisition of other companies? In general, “no”.

Figure 1.  Increasing combined market share of the five largest semiconductor companies

Despite acquisitions like that of Altera, Intel’s market share over the period from 2010 to 2016 was flat at about 15.5%. Samsung gained market share during the period, moving from 10.2% to 12.1%, but this gain was not caused by acquisitions. TSMC, the third-largest semiconductor company by revenue, grew its market share substantially during the period, rising from 4.5% to 8.1% with no acquisitions. And Qualcomm’s gain in market share from 3.1% to 4.2% was almost totally driven by the growth of its primary market, wireless telecommunications, rather than by acquisitions. Only Broadcom grew by acquisition during the period, moving from 0.7% to 4.2% market share.

There were indeed companies that built economies of scale through acquisitions during the period 2010 through 2016, but they do not represent a significant share of semiconductor industry revenue. They include the TriQuint/RFMD merger to form Qorvo, International Rectifier/Infineon, ON Semiconductor/Fairchild, and Linear Technology/Analog Devices, to name some examples. Overall data for the industry suggest that there is no correlation between operating profit and revenue, with a correlation coefficient of only 0.0706 (Figure 2).

Figure 2.  Lack of correlation between semiconductor revenue and operating profit of the largest semiconductor companies 2010 through 2016

Figure 3. Texas Instruments operating profit percent

Why then was there an accelerated level of semiconductor mergers and acquisitions in 2015 and 2016? It turns out that companies that used acquisitions and divestitures to specialize their businesses usually improved operating profit percent more than those that did not. Texas Instruments is a good example (Figure 3). When I worked at TI in the 1970s and ’80s, the company made almost every conceivable type of semiconductor component. One could say that TI made everything in the semiconductor business except money. Through a series of acquisitions, divestitures and business terminations since the year 2000, TI has focused its business on analog and power components. As a result, TI has progressed from profitability that averaged less than 10% operating profit to 40% operating profit in 2017, the highest of the major companies in the semiconductor industry.

Figure 4.  NXP Operating margin after adjustment for extraordinary items

NXP is another good example (Figure 4). In 2014, nearly 30% of its revenue came from “standard products”. Over the next five years, this percentage became negligible, and more than 90% of NXP’s revenue then came from two major areas, automotive and security.

AVAGO is a similar story, although the specialization was achieved by an aggressive series of acquisitions (Figure 5). Along with the acquisitions came divestitures, resulting in very strong market share in wireless communications and networking, a specialization that proved particularly valuable as “East-West” traffic grew in data centers. In addition, the need for improved wireless communications filters in cell phones accelerated the growth of bulk acoustic wave devices.

Figure 5.  AVAGO specialization through acquisitions

What about companies that made acquisitions in order to grow and diversify their product mix? Intel is a good example of a company that had an extremely high concentration of revenue in the microprocessor business aimed at PCs and servers (Figure 6). A series of acquisitions in new areas like McAfee for security, Wind River for embedded software, and Altera for FPGAs, as well as an organic diversification thrust with the foundry business, added to revenue but not to profit.

Figure 6.  Intel diversification versus profitability

Finally, one might wonder whether this high correlation of specialization with profitability came as a result of reductions in research and development, especially when one examines cases like AVAGO, where substantial cost reductions followed each acquisition. If this did happen, it’s not evident for the overall semiconductor industry. The total R&D investment of the semiconductor industry has grown almost every year in its history (Figure 7).

Figure 7.  Semiconductor research and development expenditures with recessions shown in gray

Research and development spending of the semiconductor industry has been relatively constant at 13.8% of revenue (Figure 8 in Chapter 2). It appears that the managers and investors in semiconductor companies don’t believe that their industry is consolidating into a slow-growth, mature business. Why would they invest nearly 14% of their revenue each year if they believed that the recent compound average growth rate below 3% was likely to continue? The semiconductor industry has reinvented itself periodically through history as new applications have evolved. These new applications have created opportunities for new companies to emerge and for total industry revenue to grow. That’s likely to be the case for the foreseeable future.

Read the completed series


Can a hierarchical Test flow be used on a flat design?

by Tom Simon on 08-15-2019 at 10:00 am

It is pretty common for physical layout to work from a flattened hierarchy for blocks or even full chips, even though the front-end design starts with a hierarchical representation. This was not always the case. Way back when, the physical layout matched the logical hierarchy during the design process. Of course, this led to all kinds of problems with placement and routing congestion. When the split was made to break hierarchical consistency between the front end and the back end, it caused endless headaches. Even today, with most flows ironed out pretty well, there are still pain points in using a flat physical representation. Nevertheless, the advantages outweigh the drawbacks.

At the same time, some operations that were traditionally performed on flat designs have matured and can be done hierarchically for improved efficiency. A good example of this is DFT, where Mentor has introduced its Tessent Hierarchical DFT and Memory BIST solutions.  The obvious advantage is that DFT and memory BIST can be inserted at the block level, and then driven in the finished design through hierarchical connections to each subunit. When changes are needed, only the affected blocks require modification.

So, what happens when the physical design is flat, but the design is so large that performing DFT and inserting memory BIST takes too long? This is the issue that a recent Mentor case study examines in the flow used for an ON Semiconductor design with ~10 million gates and 300 memory instances. Using a flat DFT and memory BIST methodology took 9 hours, which meant that a design iteration could cost an entire day. ON Semiconductor worked with Mentor to devise a hybrid flow that let them keep their flat physical design and take advantage of the hierarchical efficiencies of Tessent Memory BIST.

The challenge was to take advantage of the efficiency of hierarchical memory BIST on a flat design. Overlaid on this were the usual considerations for designing and grouping memory BIST controllers. Physical proximity plays a big role in deciding which memories can share controllers. Because it is better to test memories at their native speeds, grouped memories should be in the same clock domain. Running all memory tests concurrently might exceed the available power dissipation capabilities, so decisions are necessary about which tests can be run in parallel. There are also algorithm and repair issues to sort out.

The solution that Mentor and ON Semiconductor arrived at was to partition the final flat physical design into 13 submodules and to add memory BIST to them using Tessent’s hierarchical flow. The test insertion time for each submodule is about 1.5 hours. Because the submodules can be run in parallel, the overall runtime for the full chip went from 9 hours to 1.5 hours. Scan chain insertion was still done at the top level; however, DRC verification can be done at the submodule level, saving even more time (about 25%). IJTAG (IEEE 1687) was used at the chip level, and Tessent MBIST was used to generate memory BIST patterns.
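The runtime improvement described above is simple to check: once the flat design is partitioned, each submodule insertion run is independent, so the wall-clock time collapses to roughly the time of one run. A back-of-the-envelope sketch, using only the figures quoted in the case study (9 hours flat, 13 submodules at about 1.5 hours each):

```python
# Runtime comparison for flat vs. hybrid hierarchical memory BIST
# insertion, using the figures quoted in the case study.
flat_runtime_h = 9.0     # flat DFT + memory BIST insertion, full chip
submodules = 13          # partitions of the flat physical design
per_submodule_h = 1.5    # insertion time per submodule

# Submodule runs are independent, so given enough machines/licenses they
# execute in parallel; wall-clock time is set by the longest single run.
parallel_wall_clock_h = per_submodule_h
speedup = flat_runtime_h / parallel_wall_clock_h

print(f"wall clock: {parallel_wall_clock_h} h, speedup: {speedup:.0f}x")
```

Note that total compute (13 × 1.5 ≈ 19.5 CPU-hours) actually exceeds the flat run; the win is entirely in iteration turnaround time.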

ON Semiconductor was pleased with this approach because they gained many of the advantages of a hierarchical test flow but did not have to go back and redesign the physical implementation. As I have said before, I like real-world examples of how specific tools are beneficial. In this case, the participation of ON Semiconductor shows the practical value of the Tessent hierarchical flow in an interesting hybrid application. More details of the methodology are available in the case study, which can be downloaded from the Mentor website.


Is Hong Kong a preview of Taiwan? Will HK embolden China to take Taiwan faster?

by Robert Maire on 08-15-2019 at 6:00 am

On our recent Asian tour, Hong Kong was our last leg, arriving this past Friday and departing Monday, the day the airport stood still. We were on a 2:20 PM flight out of Hong Kong back to the States, one of the last flights to leave before the airport was shut down. Much like the China trade issue, the Hong Kong problem looks to be getting quickly out of hand for no good reason.

Fake news and censored news

When we arrived on Friday, there were a lot of protesters at the airport, all very peaceful, either sitting on the floor of arrivals or handing out literature stating their concerns and position regarding China’s assertion of control over Hong Kong. We easily walked through on the way to our taxi, taking some flyers along the way. These were certainly not the “rioters” or “terrorists” the Chinese government had described. In the previous week, when watching CNN in our hotel in China, every time a segment about Hong Kong came on, the screen went dark and silent until the segment was over. We got uncensored news through a VPN connection to the US.

An excuse for a crackdown

It seems pretty clear from its description of the protesters that China was coming up with excuses to crack down more harshly on Hong Kong. There is now a show of force, with a military convoy massing on the Shenzhen side of the Hong Kong border. We saw no protesters during our departure; the airport was shut down right after we left. China probably wanted an excuse to react by complaining about economic damage caused by the airport being shuttered. The protesters went so far as to use thousands of “post-it” notes rather than permanent graffiti or anything else that would cause damage at the airport.

Subsumed by the Chinese “Borg”

The protests started over a proposed law to allow extradition of Hong Kong residents to greater China, which is probably not much different from being sent to a gulag in the former Soviet Union. This was seen as the first major step in the elimination of the “one country, two systems” arrangement that has kept Hong Kong moderately free. It is clearly the first step of the final “absorption” process.

Taiwan should be scared to death

If we were watching the uncensored news coverage of Hong Kong from Taiwan, we would be scared to death that we were next on the hit parade. China has made its intentions and views very, very clear about Taiwan, as it also has about the South China Sea.

The acceleration of the Hong Kong absorption and crackdown on resistance could be both a preview of Taiwan and a test of the US’s resolve to resist these moves. So far, the isolationist reaction of the US administration has been more of an invitation for China to increase its pace than a reason to pause out of concern. If we were Taiwanese residents, that reaction from the US government would make us twice as scared that the US would drop Taiwan as quickly as Hong Kong and say “Taiwan was part of China anyway”… or maybe use Taiwan as a bargaining chip in trade negotiations.

The entire chip industry is at risk

If we were in the Chinese administration, upset that our fledgling chip industry was being attacked and choked off by the imperialist US, that would sound like a good enough excuse to react defensively and move up our timeframe to take back Taiwan. Yes, it might be a little higher profile than Hong Kong, but what’s the US gonna do about it anyway? Whimper a little bit?

Taiwan is too large a prize to be ignored in the “Made in China 2025” game plan; it becomes the ultimate “checkmate” against the US, as TSMC is now the leader in chip technology and there is more than enough memory technology in Taiwan as well, from Micron. It is a clear existential risk to the US and global semiconductor industry.

“When they came for me, there was no one left to help”

The near-term reaction to the Hong Kong issue is much, much larger than just Hong Kong, as it is another milestone in a larger, longer-term march. The reaction to this milestone will likely have strong implications for China’s ambitions in many other areas.

While it may be hard for the US to throw stones from its own imperialist, dominant crystal palace, it does seem to be the right thing to do.

The stocks

Hong Kong ratchets up the trade war with China to an even hotter level. If the US does react to Hong Kong, the Chinese could further up the ante in the trade war. If the US doesn’t react, it’s like an open invitation to start absorbing Taiwan sooner rather than later. Not a great choice.

This suggests that the trade issue will remain hot for the near term and continue to negatively pressure stocks, especially tech stocks and semi stocks.

There could also be some direct pressure on companies impacted by problems in Hong Kong or in nearby Shenzhen, which uses Hong Kong as a shipping port. Some US tech companies, such as AEIS, do a lot of their manufacturing in Shenzhen. However, we are sure that China won’t let Hong Kong impact Shenzhen for long, as Shenzhen is a crown jewel in China’s tech ambitions.


US-China decoupling and the semiconductor industry – who gets hurt?

by Bart van Hezewijk on 08-14-2019 at 10:00 am

On November 7 last year, Henry M. Paulson, Jr., Chairman of the Paulson Institute and former Secretary of the US Treasury, gave a speech in Singapore about the growing tension between the United States and China and warned that “an economic iron curtain” is a very real possibility as a result of a decoupling between the United States and China. Chinese as well as foreign media have since written about a possible US-China decoupling, resulting in a variety of opinions on the matter.

“Other countries are being forced into an unwelcome choice. In a win-lose world, you are either with America or you are with China.” – Edward Luce, Washington columnist and commentator for the Financial Times; Financial Times, December 20

“It is utterly unrealistic to uncouple China and the US economically. The two economies are symbiotically connected and are too interdependent to be pried apart.” – Sourabh Gupta, senior fellow at the Washington-based Institute for China-America Studies; Xinhua, June 4

“The trade war from our side is primarily about decoupling China from the US supply chain. I get it. But these policies that Trump is pursuing also gives the rest of the world an argument to decouple from the US.” – John Scannapieco, shareholder Nashville office Baker Donelson law firm; Forbes, June 26

“Decoupling could be seen as ‘strategic blackmail’ for Washington to try to prevent China from growing stronger.” – Li Xiangyang, director National Institute of International Strategy, Chinese Academy of Social Sciences; South China Morning Post, July 7

“Beijing could work more with its Asian neighbors to prepare for a possible decoupling with the United States.” – Sun Jie, Researcher Institute of World Economics and Politics, Chinese Academy of Social Sciences; South China Morning Post, July 7

Mr. Paulson also reflected on his own speech in February this year when he addressed the Center for Strategic and International Studies (CSIS): “Technology is an integral part of business success, blurring the lines between economic competitiveness and national security. The result is that, after forty years of integration, a surprising number of political and thought leaders on both sides advocate policies that could forcibly de-integrate the two countries … We need to consider the possibility that the integration of global innovation ecosystems will collapse as a result of mutual efforts by the United States and China to exclude one another. [This] could further harm global innovation, not to mention the competitiveness of American firms around the world. But more than that, I am convinced that it has the potential to harm the United States in ways that too few people in Washington seem to take seriously: They’re focused on finding ways to hurt China and attenuate its technological progress in advanced and emerging industries. But they’re less focused than they should be on what that effort might mean for America’s own technological progress and economic competitiveness.”

The potential damage the US government’s actions in the trade war could do to global innovation ecosystems as well as American companies is particularly relevant in the semiconductor industry. The two most prominent targets of the US government’s actions have been ZTE and Huawei. After Huawei was put on the US Department of Commerce’s Entity List in May, the Chinese Ministry of Commerce announced it would publish its own list of ‘unreliable entities’. Although no such list has been published yet, American semiconductor companies such as Qualcomm and Intel, which had already cut off their supplies to Huawei, could potentially be targeted.

But even without being listed as an ‘unreliable entity’ by the Chinese government, the consequences of US government actions could hurt US semiconductor companies. A staggering 67% of Qualcomm’s revenue comes from China; for Micron this is 57%, and for Broadcom 49%. These three companies’ combined revenue from China was US$ 42.8 bn in 2018. It is no surprise that the American Semiconductor Industry Association (SIA) told the Trump administration that the sanctions against Huawei risked cutting off its members from their largest market and hurting their ability to invest. US-based Qorvo indicated that sales to Huawei accounted for 15% of its total annual revenue (US$ 3bn) and US company Lumentum said that Huawei accounted for 18% of their revenue in Q1 2019.

Especially in an industry where R&D is not only necessary but also very costly, losing revenue from China will hurt the technological development and competitiveness of American firms. To get an idea about the potential impact of a US-China decoupling in the semiconductor industry, I analysed semiconductor companies’ revenue shares from the US and China. Looking at the annual reports of seven of the largest semiconductor equipment companies, all but one (Dutch lithography equipment maker ASML) sell more to China than to the US (see Figure 1). The China-US ‘sales balance’ for Applied Materials shows that their revenue from China is 3.4 times as much as their revenue from the US.

For the other three American (Lam Research, KLA and Teradyne) and two Japanese (Dainippon Screen and Tokyo Electron) equipment makers, the sales balance also favours China. One possible explanation for ASML’s US revenue being (a little) higher than its China revenue is that its most expensive (EUV) equipment is used for the most advanced technology nodes. It is likely that ASML sells these systems more to the leading chipmakers (such as Intel in the US) than to (less advanced) foundries in China.

For the combined revenue of these seven equipment makers (US$ 60 bn), the China portion is 1.9 times as much as that from the US. Sales to China and the US together represent about one third of their total sales on average, with South Korea and Taiwan (and to a lesser extent Japan) being the most important other sales regions for these companies.

Figure 1: China-US sales balance semiconductor equipment

I did the same exercise for eight of the biggest semiconductor suppliers in the world (see Figure 2). For the selected companies, six from the US (Qualcomm, Micron, Broadcom, Texas Instruments, Nvidia and Intel), one from the Netherlands (NXP), and one from South Korea (SK Hynix), the numbers are even more striking. US-based Qualcomm’s China revenue is more than 25 times as much as its US revenue.

Although the other companies’ sales balances do not come close to Qualcomm’s, another four of these companies sell more than 3 times as much to China as they do to the US. In fact, all eight of these major semiconductor suppliers have higher China sales than US sales.

For the combined revenue of these eight semiconductor suppliers (US$ 218 bn), the China portion is 2.3 times as much as that from the US. For these companies, sales to China and the US represent on average 60 percent of their total sales. Two (non-US) companies that are often mentioned in the semiconductor suppliers top 10 are not included in this analysis. South Korea’s Samsung is not included because it does not present a geographical distribution for its semiconductor business (US$ 77.2 bn) in its annual report. Taiwan-headquartered TSMC is left out as it is the only pure-play foundry among these companies.

Figure 2: China-US sales balance semiconductor suppliers

It is no surprise that for all global leaders in the semiconductor industry both China and the US are extremely important markets. What becomes clear here, though, is that for 14 out of 15 of the largest semiconductor companies in the world, including all 10 American companies, sales in China are (sometimes much) higher than sales in the US.

For the 10 American semiconductor companies included in this analysis, their combined revenue from China (US$ 79.3 bn) is 2.8 times as much as their combined revenue from the US (US$ 28.1 bn).
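The “sales balance” used throughout this analysis is simply the ratio of a company’s (or group’s) China revenue to its US revenue. A minimal sketch of that computation, where the only figures taken from the article are the combined totals for the ten American companies (US$ 79.3 bn from China versus US$ 28.1 bn from the US); the helper name is my own, not from any report:

```python
def sales_balance(china_rev_bn: float, us_rev_bn: float) -> float:
    """China-US sales balance: revenue from China divided by revenue
    from the US. A value above 1 means the company's sales mix is
    weighted toward China."""
    return china_rev_bn / us_rev_bn

# Combined figures for the 10 US companies in the analysis (from text).
combined_balance = sales_balance(79.3, 28.1)
print(f"combined balance: {combined_balance:.1f}x")
```

Applying the same function to each company’s annual-report figures reproduces the per-company balances shown in Figures 1 and 2.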

The Chinese government has made no secret of its ambitions to further develop the domestic semiconductor industry, for instance by establishing the China Integrated Circuit Industry Investment Fund (CICIIF or ‘Big Fund’) in 2014 and setting ambitious targets for the semiconductor industry in the “Made in China 2025” plan. In setting up the Big Fund, the Chinese government envisioned spending more than US$ 160 bn over 10 years to stimulate developments in semiconductor design and manufacturing, and one objective of Made in China 2025 is to increase China’s self-sufficiency in chip production to 40% by 2020 and 70% by 2025. Ding Wenwu, President of the Big Fund, acknowledged 1.5 years ago that this catching up will not be an easy task: “How can one overtake the front-runners when lagging so far behind? Not to mention the leaders are trying very hard to keep their position.”

Looking at the 2018 data, the only conclusion can be that there is still a long way to go for China to reach the desired levels of domestic chip production. IC Insights calculated that total chip production in China by Chinese-headquartered companies accounted for only 4.2% of domestic demand in 2018.

Adding the production by foreign companies in China, this number rises to 15.5%. More than 70% of the chips made in China are made in foreign companies’ fabs. Being the largest consumer of chips, China is still very much dependent on importing them. Even with chip production of foreign companies in China included, it will be very challenging to achieve the ambitious goals of self-sufficiency.

In the semiconductor industry’s globally integrated network, policies aimed at decoupling will not help anyone. China needs fabs of foreign (including US) companies to reach its targets of ‘domestic’ production. The semiconductor equipment leaders are American, Japanese and Dutch companies, and increasing production without equipment from these countries seems impossible. On the other hand, these companies also need their revenue from China to be able to invest in R&D and keep innovating. This is even more true for the largest semiconductor suppliers in the world such as Intel, Micron, Qualcomm, Broadcom and Texas Instruments. SIA mentioned in their April 2019 report Winning the Future – A Blueprint for Sustained US Leadership in Semiconductor Technology: “The US semiconductor industry already invests heavily in its own research and development to stay competitive and maintain its technology leadership. Nearly one-fifth of US semiconductor industry revenue is invested in R&D.”

Many (most) of the largest semiconductor suppliers in the world are American companies, but any policy that diminishes their China revenue will definitely hurt their competitiveness. According to the SIA report, “Semiconductors are America’s fourth-largest export, contributing positively to America’s trade balance for the past 20 years. More than 80 percent of revenues of US semiconductor companies are from sales overseas. Revenue from global sales sustains the 1.25 million semiconductor-supported jobs in the US, and is vital to supporting the high level of research and development necessary to remain competitive.”

Banning US companies from doing business with Chinese semiconductor companies will indeed delay the development of the semiconductor industry in China. But that’s just one side of the story. It will also hurt American (and other) companies’ competitiveness, as Mr. Paulson argued in his speech about a US-China decoupling and the possibility of an ‘economic iron curtain’. I hope this article gave some quantitative insight into how much the US and China, and all countries in the global semiconductor value chain, depend on each other to keep achieving technological progress. So please allow me to end with a quote by Ken Wilcox, Chairman Emeritus of Silicon Valley Bank, one of the experts interviewed in the (highly recommended) film “Trump’s Trade War” by Frontline and NPR:

“If your goal is to stop China from advancing, you’re not going to accomplish that anyway. Because they’ll just innovate around you.

Why would you want to stop anybody from making progress? … The better goal is for us to spend time on becoming more powerful ourselves.”


Mentor-Tanner Illuminate MEMS Sensing, Fusion

by Bernard Murphy on 08-14-2019 at 6:00 am

I enjoy learning and writing about new technologies closely connected to our personal and working lives (the kind you could explain to your Mom or a neighbor). So naturally I’m interested in AI, communication and security as applied to home automation, transportation, virtual, augmented and mixed reality, industry and so on – the burgeoning electronification of our world.

But there’s been a glaring gap in my coverage. All of this clever technology would have little or no value if it couldn’t sense key environmental factors – our orientation, acceleration, temperature, pressure, sound – the list is endless. This is a rich universe of innovation that, so far, I have only looked at peripherally. Now I’m getting more interested in this domain, and I’m starting with how Mentor/Tanner supports the design of the key elements at the heart of these sensors – microelectromechanical systems or MEMS – and the circuits that connect those systems to the rest of the electronics.

Take microphones. Not karaoke mics, but the ones you’ll find in smartphones, Amazon Echo speakers and the latest generation of earbuds and hearing aids, supporting “Siri” and similar commands. Many of these are made by Knowles (about whom I’ve written recently). The mic is a MEMS design, fabricated on silicon, but it doesn’t look much like conventional silicon circuits.

One basic structure might be a couple of membranes separated by a layer of air; as sound waves hit one of the membranes it flexes, changing the capacitance between the membranes. A different structure depends on sound waves flexing piezo-electric mechanical structures, generating electrical signals. Either way, analog circuitry next to the device detects the response. Then all that’s left is a little signal conditioning, converting the analog signal to digital for better noise immunity, and you have yourself a microphone.
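As a back-of-the-envelope illustration (mine, not Knowles’ actual design, with made-up dimensions), here’s how a shrinking air gap translates into the small capacitance change the readout circuit has to sense:

```python
# Toy illustration (not a real MEMS microphone model): a parallel-plate
# capacitor whose air gap shrinks as the membrane deflects under sound
# pressure. All dimensions below are illustrative guesses.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

area = 3.14159 * (0.5e-3) ** 2   # ~1 mm diameter circular membrane
rest_gap = 4e-6                  # 4 um air gap at rest
c_rest = plate_capacitance(area, rest_gap)

# A 0.1 um deflection toward the backplate raises the capacitance;
# the analog readout circuitry senses this femtofarad-scale delta.
c_deflected = plate_capacitance(area, rest_gap - 0.1e-6)
delta_fF = (c_deflected - c_rest) * 1e15
print(f"rest: {c_rest*1e12:.3f} pF, delta under deflection: {delta_fF:.2f} fF")
```

The point of the toy: the resting capacitance is only a picofarad or two, and the signal is a tiny fraction of that, which is why the conditioning circuitry sits right next to the device.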

OK, gross oversimplification but that’s the general principle, and it highlights one key aspect of MEMS: they are (at minimum) micromechanical devices with gaps between layers, cantilevered beams or a host of other possibilities, all designed to respond to targeted ambient influences in a way that can be converted into electrical signals and passed on for analysis. Another big difference from conventional semiconductor lithography is that the shapes of elements in these devices can be pretty much anything (Knowles microphones use circular membranes, for example), whereas more conventional lithography at these dimensions is rectilinear.

As you might imagine, while manufacturing techniques for MEMS may start with the same techniques used for conventional semiconductors, these must be augmented with specialized micro-machining processes. Designing these systems is also challenging. If you thought analog + digital was hard, here you have to design and verify mechanical + analog + digital systems. Tanner is well-known for its expertise in this domain and certainly seems to be widely used. They cite Knowles and MEMSIC as customers; I’m sure there are more they haven’t chosen to share.

The tool-suite supports an interesting parallel flow – for MEMS, for analog and for digital circuitry (these may or may not be on the same die). I won’t bore you with the analog and digital flows except to mention that Mentor/Tanner have tools to support each step of design/analysis and implementation. For MEMS system design and simulation Tanner provides MEMS Pro. System simulation uses the Tanner circuit-simulation tools together with a behavioral model of the MEMS, derivable directly from your design when you start from their library of composable models. And of course you can run full mixed-level simulation across all three domains.
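To make “behavioral model” concrete, here’s a minimal sketch of the idea, with illustrative parameter values of my own choosing: a membrane lumped into a mass-spring-damper and integrated numerically. Real behavioral models derived by the Tanner tools capture far more of the physics than this.

```python
# A minimal lumped behavioral model of a MEMS membrane: a mass-spring-damper
# driven by a pressure-step force, integrated with semi-implicit Euler.
# All parameter values are illustrative, not from any real device.
def simulate_membrane(mass, damping, stiffness, force, dt=1e-8, steps=20000):
    """Integrate m*x'' + b*x' + k*x = F; return the final displacement x."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (force - damping * v - stiffness * x) / mass
        v += a * dt
        x += v * dt
    return x

# Illustrative values: ~1 ug mass, overdamped, fairly stiff spring.
x_final = simulate_membrane(mass=1e-9, damping=1e-3, stiffness=100.0, force=1e-6)
# Analytically, the steady-state displacement is F/k = 10 nm.
print(f"displacement: {x_final*1e9:.2f} nm")
```

A circuit simulator co-simulates exactly this kind of lumped equation alongside the analog readout, which is why a behavioral model is enough for system-level exploration before any finite-element work.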

For implementation, again Mentor/Tanner provide tools for the analog and digital flows. They don’t mention compatibility with 3rd-party flows, though I’d guess they make some allowance for this, since the MEMS layout is unlikely to intermix with circuit layout. The MEMS implementation is of course Tanner-based, including evolving the design into a 3D solid model based on the target foundry and process. This model can then be taken into detailed finite-element analysis for mechanical, thermal, electromagnetic and fluidics analysis. And for layout verification, the Tanner tools include DRC checks that cover the special features unique to MEMS layouts. (I’m assuming you’ll still use Calibre for the rest of the circuitry.)

The economics behind delivering sensors as packaged parts is also interesting. Expected unit prices for individual sensors are very low (<$1) so sensor-makers are motivated to combine multiple sensor functions. Combining a 3-axis gyroscope, 3-axis accelerometer and 3-axis compass in one device gives you a 9-axis sensor, especially important in VR and similar functions.

Taking this further, smart sensors are becoming routine in ADAS, combining one or more sensing functions with a controller to locally process and reduce data along with a communication interface. This may be more to reduce traffic on auto ethernets than to improve vendor ASPs, but the outcome is the same. Sensor fusion is a hot topic and that means more sensors and more functionality around those sensors must be integrated into a package.
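Sensor fusion itself can be illustrated with the simplest possible example, a complementary filter combining a gyroscope’s fast-but-drifting angle with an accelerometer’s noisy-but-drift-free tilt estimate. This is my toy sketch; production 9-axis fusion typically uses Kalman or Mahony/Madgwick filters.

```python
# Toy 1-axis complementary filter: trust the gyro on short timescales
# (integrate its rate) but continuously pull the estimate toward the
# accelerometer's absolute tilt so gyro bias can't accumulate.
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate samples (rad/s) and accel tilt samples (rad)."""
    angle = accel_angles[0]  # initialize from the accelerometer
    for rate, acc in zip(gyro_rates, accel_angles):
        gyro_angle = angle + rate * dt                  # integrate the gyro
        angle = alpha * gyro_angle + (1 - alpha) * acc  # pull toward accel
    return angle

# A static device, but the gyro has a constant bias of 0.05 rad/s.
dt = 0.01
n = 2000
gyro = [0.05] * n     # pure bias; the true rotation rate is zero
accel = [0.0] * n     # the accelerometer agrees the tilt is zero
fused = complementary_filter(gyro, accel, dt)
drift = 0.05 * dt * n  # naive gyro-only integration would drift ~1 rad
print(f"fused angle: {fused:.4f} rad vs gyro-only drift: {drift:.1f} rad")
```

Even this few-line fusion keeps the biased gyro’s error bounded instead of growing without limit, which hints at why packaging sensing and local processing together is so attractive.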

Of course the Tanner tools are not only valuable for sensors. You can also build actuators, pumps, whatever you might need. Maybe that will be another blog. Meantime you can learn more about how Tanner/Mentor enable building these sensors from these two white papers: Sensors are Fundamental to New Intelligent Systems and Autonomous Drive Requires Smart Sensor Systems.



eSilicon Brings a New Software Interface to its 7nm neuASIC Machine Learning Platform at Hot Chips

eSilicon Brings a New Software Interface to its 7nm neuASIC Machine Learning Platform at Hot Chips
by Randy Smith on 08-13-2019 at 10:00 am

Figure 1: NeuASIC Platform Architecture

In early May of this year, eSilicon announced the tape-out of a test chip which included the latest additions to its neuASIC™ IP platform. At the upcoming Hot Chips Symposium to be held at Stanford on August 19 and 20, 2019, eSilicon will be demonstrating the software component of this AI-enabling IP platform, giving live demonstrations of its AI Accelerator tool. Late registration is still available for the event. The day before the symposium, various tutorials will be available at the same location. For more information on attending Hot Chips, check the event website.

I learned first-hand of eSilicon’s ASIC design expertise more than 17 years ago when 2Wire, which had licensed the TriMedia VLIW core from me, asked me to find them help in designing their chip. I made the introduction to Jack Harding, eSilicon’s CEO, so I felt responsible for the success of that chip. eSilicon came through wonderfully and the chip, which became the engine inside AT&T’s initial UVerse residential gateway device, became enormously successful. Nearly two decades later, TriMedia has spun back into Philips Semiconductor (who then got acquired), 2Wire is now part of Pace, and AT&T is still a substantial player in the residential internet market – while eSilicon has continued to grow and is now much more than just a fabless ASIC company. Their capabilities and expertise now extend into 2.5D packaging, high-value IP (see my recent blog on their PAM4 SerDes IP) and more.

The neuASIC platform seeks to fill a void in the ASIC market for machine learning. One of the reasons an ASIC solution is challenging for this segment is that AI/ML algorithms are in near-constant flux. This uncertainty has made it difficult to design an ASIC and know that it will still be appropriate by the time the final product ships.

The neuASIC platform, available in 7nm FinFET technology, addresses this challenge with a modular design methodology. The solution utilizes a library of AI-specific tiles (i.e., macroblocks) that are quickly and easily configured to support the designer’s AI algorithm. These blocks, which come from eSilicon and other IP providers, can be configured using eSilicon’s AI Accelerator. This software maps high-level AI workloads expressed in frameworks such as TensorFlow onto the neuASIC platform and produces a quick PPA estimate for the algorithm in the resulting silicon implementation. Having this platform, including the software, allows design exploration of candidate architectures to ensure the design will be within the target specifications. Beyond this flexibility, the approach also accommodates algorithm changes through minor chip modifications, such as tile changes or modifications to the 2.5D package to accommodate changes in memory components. More details of the neuASIC platform are available here.
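eSilicon hasn’t published AI Accelerator’s internals, so purely as a hypothetical sketch of the kind of first-order estimate such mapping tools start from: count the multiply-accumulates in a workload and divide by the throughput of an assumed MAC array.

```python
# Toy first-order workload estimate (NOT eSilicon's AI Accelerator, whose
# internals are not public): count multiply-accumulates for a stack of
# dense layers and convert to cycles on a hypothetical MAC array.
def dense_macs(layer_sizes):
    """MACs for fully-connected layers given [in, h1, ..., out] widths."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def estimated_cycles(total_macs, macs_per_cycle):
    """Ideal cycle count assuming perfect utilization of the MAC array."""
    return total_macs / macs_per_cycle

layers = [784, 256, 128, 10]   # a small MNIST-sized MLP
macs = dense_macs(layers)      # 784*256 + 256*128 + 128*10
cycles = estimated_cycles(macs, macs_per_cycle=1024)
print(f"{macs} MACs, ~{cycles:.0f} cycles on a 1024-MAC/cycle array")
```

A real tool layers memory bandwidth, tile placement and power models on top of this, but even the crude version shows why fast estimates enable architecture exploration before committing to silicon.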

Following are some screenshots of a beta release of AI Accelerator. This will give you a feeling for the tool UI and flow.

Figure 2: AI Accelerator user interface
Figure 3: Graphical output of AI Accelerator
Figure 4: Parametric output of AI Accelerator

eSilicon has made a smart move in providing this software for configuring their neuASIC platform. Increasingly, designers expect software interfaces to aid them in configuring IP. Asking someone to code RTL with the correct parameters, or to follow instructions in a manual or user guide, ignores the sometimes-complex interactions between the various options. The software approach is easier to use and less error-prone.

The AI Accelerator is part of eSilicon’s online Navigator environment for browsing its IP. Visit eSilicon’s STAR login page to access Navigator, or to sign up for a new account. It’s free. AI Accelerator will be available online when Hot Chips begins. In addition to pbtxt (TensorFlow), the software supports prototxt (Caffe) and json or yaml (Keras), used for machine learning applications such as neural networks.

As eSilicon is a Silver sponsor of Hot Chips, they will be providing a demonstration of the AI Accelerator at a tabletop in the main lobby of the event (there are no booths at Hot Chips). In addition to the demo of AI Accelerator, eSilicon will be discussing the content of a new white paper on Chiplets.


Accelerate Your Early Design Recon

Accelerate Your Early Design Recon
by Alex Tan on 08-13-2019 at 6:00 am

A product launch nowadays demands a shorter runway. SoC designers’ challenges lie not so much in the unavailability of proven design capture methodologies or IPs that could satisfy their product requirements, but in orchestrating the integration of all those components to deliver the targeted functionality and performance.

While facilities for faster exploration by means of advanced modeling and prototyping during system architecture inception are widely accessible, shortening product development time requires scrutiny of the long implementation processes. For example, high-level synthesis (HLS) has become mainstream and provides a facility for exploring system architecture configuration, power consumption, and design footprint versus feature tradeoffs. On the other hand, functional and physical verification, which frequently dictate the success of the product launch, get a late start due to the immaturity of most design blocks.

The motivation for early integration
With the plethora of AI and IoT oriented designs, the SoC integration efforts need to have an early start to align with a shorter product launch cycle. This translates to the need for having design methodology and point tools capable of providing early exploration and access to baseline metrics that highlight potential design integration issues.

A common chip design approach is to run chip integration and block development concurrently to minimize the number of DRC iterations, as illustrated in figure 1. The main challenge to early chip-level physical verification is an artificially high number of reported DRC violations from the unfinished design blocks, in major part attributable to widespread systematic issues such as off-grid placement, an incorrect via type on a clock net, or an incorrect routing layer and orientation of IPs.


Additionally, it is a complex task to segregate block-level from top-level routing violations, as topological changes (block-level pin to top-level net), physical constraints (routing resources) and their associated DRC rules might introduce variants in the generated violations. Likewise, using the default settings in foundry rule decks for initial DRC runs leads to long runtimes, a massive number of DRC errors and a large results database.

To enable design teams to start early integration exploration while performing physical verification of their full-chip design layout, Mentor recently introduced Calibre™ Reconnaissance (Calibre Recon for short). The tool is designed to effectively identify potential integration issues and generate quick feedback for corrective actions to the design teams, eventually lowering the number of DRC iterations and reducing physical verification time to tapeout closure.

In order to assess Calibre Recon’s effectiveness, we look into several areas. How does it tackle the myriad design rules while dealing with dirty designs? What about runtime, given that physical verification notoriously incurs long turnaround times? And how does it compare with the existing approach across designs?

Reduction in Rule Checks and Violations
Selecting relevant design rules is a daunting task, as some rules may be important yet incur long runtimes in the presence of errors. How does the designer know which ones to activate or exclude? Which ones will trigger advanced analysis? Indiscriminately selecting more categories, such as antenna checks or all connectivity checks, may also be irrelevant for the current development phase and eventually produce sub-optimal results.

Calibre Recon automates the deselection process, deciding based on the check type and the number of operations executed for each check. It aims for optimal coverage with quick runtime and a smaller memory footprint. On average it cuts the number of checks by half across various process nodes. The deselected checks and categories are captured in the transcript for further reference, and it honors any checks/categories the user has manually pre-filtered. As shown in figure 3, total reported violations are reduced by as much as 70% from the original count, which facilitates the analysis and debugging of real systematic issues.
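Mentor hasn’t published the exact heuristics, but the general idea of cost-based check deselection can be sketched as follows (all rule names, types and thresholds here are hypothetical):

```python
# Hypothetical sketch of cost-based rule deselection (Calibre Recon's
# actual heuristics are not public): drop checks whose operation count
# exceeds a budget, or whose type is irrelevant to early integration.
def select_checks(checks, op_budget=50, skip_types=("antenna", "connectivity")):
    """Keep checks that are cheap enough and relevant to early integration."""
    return [c for c in checks
            if c["ops"] <= op_budget and c["type"] not in skip_types]

rule_deck = [
    {"name": "M1.SP.1", "type": "spacing",      "ops": 4},
    {"name": "M1.W.1",  "type": "width",        "ops": 2},
    {"name": "ANT.1",   "type": "antenna",      "ops": 12},   # wrong phase
    {"name": "CON.1",   "type": "connectivity", "ops": 30},   # wrong phase
    {"name": "VIA.X.9", "type": "via",          "ops": 180},  # too expensive
]
kept = select_checks(rule_deck)
print([c["name"] for c in kept])  # only the cheap spacing and width checks remain
```

The real tool surely weighs far more than an operation count, but the shape of the tradeoff is the same: trade exhaustive coverage for the fast feedback that early integration needs.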

Runtime, Gray Boxing and nmDRC
Calibre Recon supports both early block-level verification and chip-level validation, as these two types of efforts are usually done in parallel by design teams. Having top-level context feedback from Calibre Recon allows block designers to fix the reported systematic issues and frees up more productive time for them to concurrently clean up the remaining rule violations internal to their blocks. As shown in figure 4, running the Calibre Recon tool on blocks (tiles) during initial routing resulted in up to 8x runtime improvement at 4x less memory.

Another critical part of the integration work involves checking the consistency of interfaces across design blocks and IPs. An unfinished design block can be fitted into the top-level scope as a gray box, allowing the designer to focus on interface and top-level routing checks while ignoring some internal block details. It allows topology-accurate inclusion of all design blocks or IPs in the top-level view and permits more meaningful top-level floorplanning and timing assessment.

The gray box approach may optionally be used in conjunction with Calibre Auto-Waivers functionality, which helps prevent new DRC violations caused by removing geometries from the affected cells. This is achieved by waiving any violations introduced by excluding regions from the specified cells; all waived violations are saved to a waiver results database for later review. The gray box solution isolates the integration and routing violations associated with the assembly from the immature block violations.

The Mentor Calibre Recon tool has been shown to reduce overall DRC runtime by up to 14x while covering about 50% of the total DRC rule set. Its application during early integration accelerates the overall design recon and provides early design integration metrics for a successful tapeout. For more details and a chart of its application across a variety of chips, please check HERE.


Webinar: Designing Complex SoCs and Dealing with Multiple File Formats

Webinar: Designing Complex SoCs and Dealing with Multiple File Formats
by Daniel Payne on 08-12-2019 at 10:00 am

StarVision Pro

In SoC design it’s all about managing complexity through modeling, and the models that make up IC designs come in a wide range of file formats like:

  • Transistor-level: SPICE
  • Interconnect parasitics: SPEF
  • Gate and RTL: Verilog, VHDL

Even with standard file formats, designers still have to traverse the hierarchy to find out how everything is connected. IP reuse is here to stay, yet the challenge with using hundreds of IP blocks is finding out how they are all connected, ensuring consistency of signal naming conventions, and making sure that no pins are unconnected or misconnected. The debug process alone is time consuming and tedious, especially if you don’t have a specialized tool designed for the task.
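As a toy illustration of the kind of connectivity traversal such tools automate (a made-up dict-based netlist of mine, not real Verilog parsing), here’s the simplest possible floating-pin check: flag any pin whose net appears only once in the design.

```python
# Hypothetical sketch: walk a flattened netlist (plain dicts, not a real
# netlist format) and flag pins whose nets connect to nothing else,
# i.e. likely unconnected or misconnected pins.
from collections import defaultdict

def find_dangling_pins(instances):
    """Return (instance, pin) pairs whose net appears exactly once."""
    net_uses = defaultdict(int)
    for inst in instances:
        for net in inst["pins"].values():
            net_uses[net] += 1
    return [(inst["name"], pin)
            for inst in instances
            for pin, net in inst["pins"].items()
            if net_uses[net] == 1]

design = [
    {"name": "u_cpu",  "pins": {"clk": "clk_main", "irq": "irq_n"}},
    {"name": "u_intc", "pins": {"clk": "clk_main", "irq": "irq_n"}},
    {"name": "u_io",   "pins": {"clk": "clk_main", "rdy": "rdy"}},  # rdy floats
]
print(find_dangling_pins(design))  # only u_io.rdy has a single-use net
```

Scale that mental exercise up to hundreds of IP blocks, multiple file formats and hierarchy, and it’s clear why a purpose-built visualization tool beats hand-tracing.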

Fortunately there is hope, because Concept Engineering has been at the forefront of providing engineers with a visualization and debugging tool called StarVision Pro. There’s a webinar scheduled for Thursday, September 12th at 10AM PDT that will help engineers debug quicker by:

  • VISUALIZE: Render schematics on the fly for VHDL/Verilog/Spice level netlists to understand the function of the design easily.
  • PRUNE: Extract, navigate and save critical timing paths/fragments of the design as Verilog/Spice/SPEF netlists for reuse as IP or external use in partial simulation.
  • CLOCK DOMAIN ANALYZER: Visualize and detect different clock domains in the design.
  • CROSS-PROBE: Drag & drop selected components/nets between all design views to cross probe and shorten debug time, especially during tape-out for full-chip debug. Also the ability to cross-probe analog and digital simulation data on the netlist.
  • PARASITIC: Visualize and analyze parasitic networks and create SPICE netlists for critical path simulation.
  • NETLIST REDUCTION: Instantly turn off/on parasitic structures in SPICE circuits for better comprehension of CMOS function
  • SKILL EXPORT: Export schematics and schematic fragments into Cadence Virtuoso.
  • SOC OR MIXED SIGNAL DESIGN: Visualize, Debug and Analyze the RTL, GATE, and SPICE Design in one cockpit!
  • DOCUMENT: Generate design statistics & reports: Instance & primitive counts
  • TCL API: Extend the functionality of StarVision to match project needs by interfacing with the open database through TCL scripts and in batch mode
  • IDENTIFY DIFFERENCES IN SCHEMATICS: Extend the capabilities of the tool to identify differences between designs.

At my last EDA company we used StarVision Pro to inspect SPICE netlists with extracted parasitics in order to understand connectivity and debug circuit simulation results, and this tool saved us hours of effort versus manually tracing and creating a schematic from a netlist. Why work harder when you can work smarter?

Register online here.

Related Blogs


Steve Jobs, NeXT Computer, and Apple

Steve Jobs, NeXT Computer, and Apple
by John East on 08-12-2019 at 6:00 am

From time to time I give presentations to various audiences: Silicon Valley the Way I Saw It. I always enjoy doing that. One particular section always makes me stop and think. “Who was this guy? How did he do what he did? Why didn’t I do that? What made him so special?” It’s the Steve Jobs section. I met twice with Steve Jobs when I was working at AMD. He was a fascinating human being. But the stories about my meetings with him pale compared to the stories about Steve himself. So, let’s spend some time going through a little bit of Apple / Steve Jobs history. Apple was formed in 1976 by Steve Jobs and Steve Wozniak. They both went to high school a couple of miles from where I live, but we didn’t cross paths until much later in life. Their first real product, the Apple II, did very well, particularly in the educational market. They went public in 1980. Jobs made two hundred million dollars at the IPO. They kept growing and by 1984 their annual sales were approaching one billion dollars. What a success story!!

In 1983 the Apple board decided that they wanted an experienced CEO. The view was that selling computers should not be a high-tech sell; it should be a consumer sell, more like selling refrigerators than selling mainframe computers. They wanted someone with those skills. They also wanted someone more mature than Steve, who was still under 30, had no previous experience managing anything or anybody before Apple, and was known to be a difficult person to work with. After an extensive search, they found John Sculley, the VP of marketing at Pepsi Cola Company. Jobs loved Sculley. Sculley loved Jobs. Apple really wanted Sculley to join them. At first he was hesitant, but eventually Jobs convinced him to take the job using the famous line, “Do you want a chance to change the world, or do you want to spend the rest of your life selling sugared water to kids?”

In 1983, of course, DOS-based personal computers (IBM PCs and their clones) were a lot closer to mainframe computers than they were to refrigerators. In order to use them, you had to talk to the computer using DOS. You had to memorize a cumbersome set of instructions that were part and parcel of DOS. A few examples of talking to a DOS machine:

DISKCOPY [drive1: [drive2:]] [/1] [/V] [/M]

move stats.doc, morestats.doc c:\statistics

type hope.txt | more

What the heck did those mean? Who would want to learn that language? And, of course, the syntax demands were exacting. Any misspelling, extra space or incorrect punctuation would confuse the system. Jobs had an answer for this: the mouse. The point-and-click, drag-and-drop interface that he had seen at Xerox PARC. That interface was the critical feature of the soon-to-be-announced Mac. Jobs didn’t invent point and click, but he recognized its beauty. At his funeral in 2011 his wife Laurene said, “Steve had the ability to see what wasn’t there and what was possible.” The mouse was a perfect example of that. She also reminded the room of something that Steve had told her: “If a customer is too stupid to use an Apple product, then it’s not the customer that’s stupid. It’s Apple that’s stupid!” Again, the mouse was a perfect example of how Steve thought.

Sculley came aboard, of course, and at first he and Jobs worked together well. They announced the first Mac in January of 1984 during the Super Bowl. (The 49ers beat the Miami Dolphins 38-16. The game was at Stanford Stadium. I was there!!) The Mac used a mouse. The mouse made the Mac much, much easier for the layman to use than an IBM or IBM clone. The TV commercial that ran during the Super Bowl, which featured a woman hurling a sledgehammer at a “Big Brother” screen straight out of George Orwell, won any number of awards. Even today it’s viewed by many as the best commercial ever made. The relationship between Jobs and Sculley was great! But then it started to fall apart. Soon it got to the point where the board had to choose between Jobs and Sculley. They chose Sculley. Jobs wasn’t fired, but he was demoted to a position he wasn’t willing to accept. He left shortly afterward in 1985.

When he resigned, he told the Apple board not only that he was leaving, but also, “I’m taking five people with me.” The big problem with that? He hadn’t told the five people he was going to do that. They hadn’t yet agreed to join him. One of the five was Rich Page. Rich was one of four Apple fellows (“fellow” is the highest rank an engineer can attain; Steve Wozniak was also an Apple fellow). Rich is still active in the Valley today doing angel investing and board work. He told me that the five were absolutely shocked when they found out what Steve had done. But Steve really had the power of persuasion! Eventually all five signed up and left with Steve to form NeXT Computer. Steve had a concept in his mind that would lead to what he thought would be the perfect computer for schools. The right feature set, the right price point, the right introduction date. It was going to be perfect!!

“The ability to see what wasn’t there and what was possible.” At NeXT he saw “what wasn’t there,” but he missed on the “what was possible” part. He envisioned a great feature set, a great-looking package, and a great GUI. But those didn’t mesh at all with the great schedule and the great price point that he had in mind. Steve once said that creativity came from saying no 1000 times. He did that at NeXT, but it didn’t pay off. The machine came out late and with an unacceptably high price. Nobody wanted it.

NeXT was failing.

Next week:  My meetings with Jobs

See the entire John East series HERE.