
The Korea/Japan trade war benefits US more than US/China hurts US
by Robert Maire on 09-18-2019 at 10:00 am

Investors underestimate Japan/Korea trade war, US chip equip companies get collateral benefit, Long term share shifts in China & Korea, Creates near term upside for US companies, Memory Chip Price “Mirage”.
The lesser known trade war
While the market and the news have been preoccupied with the ongoing soap opera of the US-China trade war and its star performers, Trump & Xi, an escalating war has been going on between Japan and Korea that may have as much or more impact on some companies than the US-China dispute.
We, too, have been guilty of focusing on China for several years now, as it is clearly vying with the US for technology dominance, and that struggle plays out on a global stage.
While the trade war between Korea and Japan may not be for global dominance as neither country is on par with the US or China, it is still a blood sport with very important consequences for global markets, company profitability and national honor.
Long memories- back to WW2
The issues between Japan and Korea go back to World War 2 and earlier, as Japan ruled Korea from 1910 to 1945. The issue of “comfort women” continues as the main symbol of the animosity.
This came to a head when the Korean Supreme Court recently allowed Koreans to sue Japan over damages from “forced labor” in WW2.
Japan retaliates where it hurts- in semiconductors
Japan’s response was to restrict exports to Korea of key resist and etching gases that are critical to manufacturing both semiconductors and flat panel displays. Korea is almost 100% reliant on Japan for these materials, and semiconductors, flat panels and associated products are clearly the lifeblood of Korea’s economy.
Take a look at Samsung’s quarterly reports and you understand how bad things would be without chips.
Thus Japan’s restriction is in essence a “death threat” to the entire Korean economy.
The reaction in Korea has been swift as expected
The reaction in Korea is about the same as China’s reaction to the death threats against Huawei by the US and the actual death of the Jinhua memory fab at the hands of the US….Korea has gone ballistic….
Both the Korean government as well as Korean businesses are now hell bent on fixing the dependency on Japan but more importantly going many steps further and cutting off as much business from Japan as is possible, as quickly as possible.
Korea has choices that China doesn’t

Korea has the ability to find other vendors to replace Japanese companies especially in the semiconductor business.

Tokyo Electron, the second largest semiconductor equipment maker in the world after Applied Materials, stands to lose a whole lot of business, as the Koreans can choose from Lam, Applied, ASM International and a number of domestic suppliers such as Semes. They could even go to China’s recently IPO’d AMEC.
China’s Jinhua didn’t have this many alternatives to stay alive. China still wants to do business with the US and the US with China, but in Korea, it’s blood and personal pride against Japan. The trade war between the US and China is exclusively about money, without a lot of other emotion mixed in.
 
US semi equipment gets collateral benefit
Much of the deposition and etch equipment used in semiconductor production, especially for non-critical applications, is a bit of a commodity, with pricing being a significant selection factor. TEL has picked up a lot of share in recent times, and the Japanese have a long history of cutting prices to the bone just to retain customers and employment. Many times US companies bow out of the “race to the bottom” against Japanese companies.
However, we are now in a situation where US and other non-Japanese equipment companies may not need the lowest price to get the business, and may win simply by being close enough that Korean customers can avoid the distaste of having to buy a Japanese product.
 
Near and long term benefit
We think the impact of avoiding Japanese products has already started, as some companies we know have already won business they otherwise wouldn’t have gotten.
Korea has also made it clear that they are re-evaluating their long term supply partners so the anti-Japan sentiment of today will likely be institutionalized for the future.
Although we think the trade war did both long and short term damage to US-China relations, we think there will be less long term damage once the US and China work out a trade agreement. Animosity between Japan and Korea, by contrast, will remain, as it has festered just below the surface for many, many years.
 
US companies could see pick up without memory recovery
While we still need memory to recover for a true recovery, we think there could be enough share shift between US and Japanese companies in Korea to see a meaningful uptick in business without memory capex coming back. Korea has also been the biggest buyer of Japanese equipment, and if much of that buying goes elsewhere (mainly to the US) it will look a lot like a broad-based spending recovery.
We think the potential gains in Korea could outweigh potential losses in China, so it would be a net win for US companies. While China is not far from being a bigger spender than Korea in semiconductor equipment, we think the US will gain more in Korea than it can possibly lose in China in the long run.
The “Memory Mirage”
We would caution investors not to read too much into memory pricing given the backlog of idled capacity in all the fabs.
More importantly, we think there has been a bit of an “artificial” inflation of memory pricing because of concerns that Korean production could be curtailed by Japan cutting off key materials.
In short, we think memory users have been “stocking up” to offset any supply disruption, and as such the recent stabilization in pricing/demand for memory may have a lot more to do with “stockpiling” than with a true recovery or stabilization.
If Japan and Korea work out their differences, at least on critical materials, the unwinding of stockpiled memory and artificial buying could cause another blip or a return to a weaker environment.
The stocks
We think that most semi equipment stocks in the US should see benefit with Lam being perhaps one of the biggest winners as it had also been the biggest loser on the way down. KLA in the US and ASML in Europe are perhaps the least impacted as there is minimal Japanese competition.
Applied Materials will also clearly benefit as it does not yet own Kokusai and is not seen as a Japanese company in Korea. Perhaps KKR is the lucky one here unloading a largely commodity company just before a trade war may cut off one of Kokusai’s largest markets.  It certainly makes the Kokusai deal look even more costly than it already looked….maybe Applied can back out or re-negotiate.
We think that we may already hear of positive benefit when companies report their current, September quarter. Although we are still far away from a memory recovery a pick up from share shift in Korea could help define a bottom to the current cycle which would make investors as well as companies happy.
We would be net buyers of the stocks.  For those more adventurous investors we think strategies such as long Lam, short Tokyo Electron might be an interesting pair trade.

Glasses and Open Architecture for Computer Vision
by Bernard Murphy on 09-18-2019 at 6:00 am

Fisheye view

You know that AI can now look at an image and detect significant objects like a pedestrian or a nearby car. But had you thought about a need for corrective lenses or other vision aids? Does AI vision decay over time, like ours, so that it needs increasing help to read prescription labels and identify road signs at a distance?

In fact no. But AI-assisted vision, generally called Computer Vision (CV), trains on undistorted, stable images in decent lighting. Let’s pick those assumptions apart, one at a time. To get a nice flat-field image in front of (or behind) your car you could use multiple cameras with relatively narrow-angle lenses, studded along the fender. That would be very expensive and power-hungry. Or you could use a single camera with a very wide-angle lens (see above). Much better cost-wise but there’s a bit of a problem with distortion.

This is correctable through a process known as dewarping, a geometric transformation of the image and a process which is already well understood. Image stabilization is another familiar technique, correcting for the jitters in your hand-held camera or a GoPro on your helmet as you’re biking down a rocky slope.
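
To make the dewarping idea a little more concrete, here is a minimal Python sketch using OpenCV’s fisheye model; this is an illustration only, not CEVA’s or Immervision’s actual pipeline, and the camera matrix, distortion coefficients, and file names are placeholder values that would normally come from calibrating the lens.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- in a real system these come from calibrating the
# fisheye lens (e.g. with cv2.fisheye.calibrate on checkerboard images).
K = np.array([[320.0,   0.0, 640.0],
              [  0.0, 320.0, 360.0],
              [  0.0,   0.0,   1.0]])       # camera matrix
D = np.array([0.05, -0.01, 0.002, 0.0])     # fisheye distortion coefficients k1..k4

img = cv2.imread("fisheye_frame.jpg")       # hypothetical captured frame
h, w = img.shape[:2]

# Precompute the pixel remapping that "dewarps" the fisheye projection
# into a conventional rectilinear view.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)

# Apply the geometric transformation -- this is the dewarping step itself.
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                      borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("rectified_frame.jpg", rectified)
```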

There are fixes for these problems today, but these generally add devices or multi-purpose IPs, cost and more power consumption. That can be a real problem in consumer devices because we don’t like more expensive products and we don’t want our battery to run down faster. It’s also a problem for the sensors in your car. More AI processing is moving to the sensors to reduce bandwidth load on the car network, allowing sensors to send objects rather than raw images to the central processor.

CEVA and Immervision, a developer/licensor of wide-angle lenses and image processing technologies, announced a strategic partnership just last month. For a significant investment in Immervision, CEVA gained exclusive licensing rights to their portfolio of patented wide-angle image processing technology and software. As part of the deal, CEVA also licensed Immervision technology for better image quality and video stabilization.

(Incidentally, as a part of the same deal, CEVA also licensed Data-in-Picture technology which integrates within each video frame fused sensory data, such as that offered by Hillcrest Labs, also recently acquired by CEVA. CEVA seems to be putting together a very interesting business proposition in CV – watch this space.)

If you need a low cost, low power solution in a consumer device or a car or in many other applications, it makes sense to integrate these capabilities directly into your CV solution. That’s what CEVA have done with their just-announced NeuPro-S IP which bundles in the vision processing software. So you can have a single fisheye backup camera at low cost, low power and probably higher reliability than multi-chip solutions.

There are a lot of other interesting features in the NeuPro-S including integrated SLAM and safety-compliance for which I’ll refer you to the website link below. But there is one feature I thought worthy of special mention in this short blog. Multiple AI accelerators from multiple sources are starting to be integrated together in single-chip implementations. This raises an interesting question – how do you download training to all these accelerators? Standalone solutions per accelerator don’t look like a great solution.

CEVA have invested heavily in their CDNN deep-learning compiler, which maps and optimizes networks from the most common training frameworks onto inference networks running on edge devices. The optimizations include advanced quantization algorithms (mapping from floating-point to fixed-point), data flow management, and optimized CNN and RNN libraries to run on the edge.
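
CEVA has not published the details of CDNN’s quantizer, but the basic floating-point-to-fixed-point mapping referred to above can be sketched in a few lines of NumPy; this generic symmetric int8 scheme is an illustration only, not CEVA’s algorithm.

```python
import numpy as np

def quantize_symmetric_int8(weights):
    """Map float32 weights to int8 plus a scale factor.

    Generic symmetric quantization for illustration only -- not CEVA's
    CDNN algorithm.
    """
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats so the quantization error can be measured."""
    return q.astype(np.float32) * scale

# Toy example: quantize a small weight tensor and check the error introduced.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric_int8(w)
print("max quantization error:", np.max(np.abs(w - dequantize(q, s))))
```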

Now CEVA have opened up the CDNN interface, through a feature they call CDNN-Invite, to support not only NeuPro and CEVA-X and XM platforms but also proprietary platforms, making support for heterogeneous AI a reality on edge devices while still keeping the simplicity of a unified compiler interface. I like that – open interfaces are almost always a plus.

You can learn more about the NeuPro-S HERE. (need link)

 

 


Learn About Implementing SmartNICs, an Achronix White Paper
by Randy Smith on 09-17-2019 at 10:00 am

We have all seen the announcements of ever-increasing network capabilities within data centers.  Enabling these advances are improvements in connectivity including SerDes, PAM4, optical solutions, and many others. It seems 40G is old news now, and the current push is for 400G – things are changing very quickly. These advancements focus on high-speed transmission of data within the data center. What has not been talked about as much is the extra burden that can be added to the processors themselves to manage all this traffic. What would be the point of connecting all of these blazingly fast processors if all their efforts only go towards talking to each other? Into the breach stepped “SmartNICs,” also known as intelligent server adapters (ISAs). These devices can offload many of these network-management tasks from the host CPUs.  A SmartNIC allows you to use those CPUs for meaningful work, not just networking and housekeeping.

SmartNICs have been in the discussion for several years now, though the name has been used more like a marketing term without a clear definition. One short definition is that a SmartNIC:

  1. Implements complex server-based functions requiring compute, networking, and storage;
  2. Supports an adaptable data plane with minimal limitations on the functions available;
  3. Works seamlessly with existing open-source ecosystems.
Figure 1: Traditional NIC vs. SmartNIC

As I said, this is the “short definition.” This topic is quite intricate. Fortunately, Achronix has now released a white paper titled, How to Design SmartNICs Using FPGAs to Increase Server Compute Capacity. The paper initially discusses the three forms of SmartNICs in use today – Multicore SmartNICs, based on ASICs containing multiple CPU cores; FPGA-based SmartNICs; and FPGA-augmented SmartNICs, which combine hardware-programmable FPGAs with ASIC network controllers.

As you will have noticed from the white paper title, Achronix uses FPGAs in its SmartNIC solution.  The reasons for this are the limitations inherent in a multicore SmartNIC design. Multicore SmartNIC designs usually include an ASIC that incorporates many software-programmable microprocessor cores. The cores used may vary, but these solutions are still expected to be limited for two reasons: (a) They are based on software-programmable processors which are slower when used for network processing due to a lack of processor parallelism; and (b) The fixed-function hardware engines in these multicore ASICs lack the data-plane programmability and flexibility that is increasingly required for SmartNIC offloading. Using multiple cores cannot achieve the parallelism gained from using numerous custom pipelines in an FPGA.

Figure 2: Adding a Separate QoS Engine to Manage SLAs

There are many combinations and layers of features available when building a SmartNIC with an FPGA. The Achronix white paper goes into these variants in detail. I found it particularly good at describing the architectural modifications to achieve specific features. The white paper focuses on the concept, architecture and implementation of a SmartNIC using FPGAs and is something that anyone with interest in this area should pick up. You will find a long list of white papers, including this one, provided by Achronix on the documentation section of their website. This white paper requires minimal registration information to access.

I felt I learned a lot going through this white paper, as it contains so much information and so many examples. If you care about this topic, you should pick up a copy now.


Major Drone Attack Against Global Oil Production Showcases Weak Cybersecurity Thinking
by Matthew Rosenquist on 09-17-2019 at 6:00 am

Drones attacked an oil processing facility last week and shut down half of all Saudi capacity, representing about 5% of the world’s daily oil production. We have seen how a botnet of compromised home appliances can take down a sizeable chunk of the internet, control structures of electricity and other critical infrastructures are being hacked, and even life-saving medical devices are proving to be vulnerable to compromise.

Attacks with connected technology, Internet-of-Things (IoT) devices, and Industrial IoT components are ramping up, now attaining levels with serious consequences. It is time we revisit the deeper discussion of converged cybersecurity!

Cybersecurity is not just about computer viruses, hacking packets, stealing passwords, breaching databases, or ransoming files. It covers the evolving domain of security, privacy, and safety aspects for innovative digital technology. With the integration and transformation of the world’s growing digital ecosystem, cybersecurity becomes even more important to keep society safe and preserve the continuity of our daily lives. Emerging risks pose a serious threat and must be managed across the scope of intelligent devices, as they have deep potential ramifications.

Malicious actors can use connected technology to promote their agendas and conduct a wide range of harmful attacks. Drones are one aspect, as we have seen commercially available products disrupt airports, attempt to assassinate political leaders, conduct unauthorized surveillance, and transport illicit drugs over borders.

Drones as Weapons

The successful drone attack against Saudi Aramco likely used high-end commercial or low-grade military devices, but earlier attempts were reported with consumer and commercial level drones. Regardless of whether it is a few big-payload devices or a swarm of smaller commercial drones, serious damage can be inflicted. In the past, Saudi Aramco suffered one of the biggest hacking incidents on record, when a crushing attack on their computer infrastructure destroyed massive amounts of data and shut the company down for an extended period.

Unfortunately, this is only the beginning. Imagine if the drone and computing attacks happened in coordination. Hackers might manipulate industrial IoT control systems such as valve manifolds, petroleum processing equipment, and storage pressurization to prepare for maximum damage. They could then disable safety overrides and fire suppression systems while drones move in to initiate kinetic damage. These facilities are basically big chemical plants with highly flammable contents. Such synchronization could greatly amplify the effects, resulting in a massive impact.

The cybersecurity strategy community predicted such tactics and many more issues across this space, including downing airliners, destroying power grids, terror attacks on crowds at large public gatherings, and assassination attempts on political and religious leaders. The list goes on.

The fictitious Slaughterbots video released two years ago highlights some disturbing possibilities along these lines. Much of the core technology for killer drones does exist today, although to my knowledge it has not yet been assembled with lethal payloads and coordinated to operate in an AI swarm configuration. It is just a matter of time until some group takes the next step.

Looking back over the last three years, many of us in the cybersecurity world predicted drone attacks. Some paid attention and a few forward-thinking companies started developing countermeasures, but by and large, most of the market ignored the warnings. This lack of interest has opened a window of opportunity for attackers. Although funding has been scarce, it is fortunate that some innovation continued.

On the consumer side, the tools to counter errant or malicious drone use have made progress. Many techniques have been explored, from birds of prey (yes, birds) plucking drones from the sky, to projectile nets, signal jamming, and navigational electronic interference. Hobbyist drones are easier to counter because of their limited range, but there are some commercial drones that can travel 10+ miles with a payload. The drones being used to damage targets in Saudi Arabia, which struck the oil refinery this week, are likely larger, low-end military-designed units (X-UAV or Qasif types) with much greater range and destructive capacity. Those represent a far different challenge but are now part of the scope that organizations and governments must contend with. More sophisticated detection and eradication systems are beginning to make their way to market in limited numbers.

The Bigger Problem

The problem is not limited to drones, but rather the combination of all the technology that is connected. Innovation is pushing digital functionality and enabling new device features for automation, accessibility, and remote operation.  As we hand over control to autonomous devices, such as cars, buses, and planes, we then put the safety of drivers, passengers, other vehicles, and pedestrians at risk.

Upgrades to major industrial facilities open up risks of compromise, which could lead to industrial accidents such as chemical spills, fires, and water contamination. With major critical infrastructure elements being automated and accessible remotely, the foundations of our society are put in jeopardy. Electricity, water, sanitation, food distribution, emergency services, healthcare, and communications are at significant risk.

Many targets, beyond industrial and manufacturing, will likely be considered by attackers. Airports, shipping vessels, major sports/entertainment stadiums, political gatherings, government leaders, transportation infrastructure, electrical networks, fresh-water plants, etc. are all potentially at risk from connected technology. Attackers may be able to tamper with or destroy systems, distribute harmful materials, interfere with services, or violate the privacy of citizens.  We should expect that violent groups will use whatever tools and techniques necessary to reach across the globe as it suits their needs.

Welcome to a New Era in Human Conflict

We face a synthesis of digital and physical tools that will be leveraged across the spectrum of human conflict: traditional open combat, asymmetric and guerrilla warfare, terrorism and religious extremism, citizen revolts, and low-intensity conflicts such as political protests and suppression.

Cybersecurity is needed, in conjunction with traditional physical security, to manage evolving risks. It is imperative we recognize the global strategic challenges and work together to lay the necessary foundation for strong security defenses and trust in future technology. Only looking at yesterday’s risks or today’s crisis is NOT enough. We must have the vision and courage to look forward and maneuver to manage the risks of attacks in the future.


Automatic Documentation Generation for RTL Design and Verification
by Daniel Nenni on 09-16-2019 at 10:00 am

Ask any hardware or software engineer working on a product, and they will tell you that writing documentation is a pain. Customers have high expectations for user manuals and reference guides, usually requiring a team of technical writers to satisfy their requirements. In order to meet time-to-market deadlines, documentation must occur in parallel with the later stages of product development. Every time that a user-visible feature changes, the documents must be updated. Even after product release, constant refresh is needed. Software adds new functionality with every release; hardware changes less frequently but chip variants are common and embedded software updates in system-on-chip (SoC) designs may add new features.

Software development teams have tried to tackle this challenge by developing tools that could automatically generate at least some parts of the documentation. The structure of the source code provides some guidance; this is generally augmented by including in the code special comments (sometimes called pragmas) that have meaning to generation tools. As a simple example, a “case” statement might define the available opcodes for a processor. A pragma might flag this statement as relevant for documentation and a tool might extract the description for each opcode from comments embedded in the source code.
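
Readers more familiar with software flows can picture the mechanism with a small Python analogue of that opcode example; the opcode names and values and the `#:` member-comment convention (a Sphinx autodoc feature) are purely illustrative and are not taken from the article or from any particular tool discussed here.

```python
from enum import IntEnum

class Opcode(IntEnum):
    """Instruction opcodes for a hypothetical processor.

    A documentation generator (pydoc, Sphinx autodoc, etc.) can pull this
    docstring straight out of the source, so the reference manual stays in
    sync with the code.  The ``#:`` markers below are a Sphinx convention
    for attaching a description to each individual member.
    """
    ADD  = 0x01  #: add two registers
    SUB  = 0x02  #: subtract two registers
    LOAD = 0x10  #: load a word from memory
    JUMP = 0x20  #: unconditional branch

if __name__ == "__main__":
    import pydoc
    print(pydoc.render_doc(Opcode))  # renders the extracted documentation
```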

There are many documentation generators available; Wikipedia lists more than twenty of them. All those listed are focused on programming languages such as C, Java, Perl, and Python. In fact, Wikipedia defines a documentation generator as “a programming tool that generates software documentation.” But what about hardware designers writing SystemVerilog code? Surely, they would like to be able to generate documentation for their chip designs as well. To find out if there is a solution, I turned to Cristian Amitroaie, CEO of AMIQ EDA. His company took the software notion of an Integrated Development Environment (IDE) into the hardware world with their Design and Verification Tools (DVT) Eclipse IDE. They also expanded the idea of code linting, also originally for programming languages, and developed their Verissimo SystemVerilog Testbench Linter.

It turns out that AMIQ EDA has also adopted the idea of automated software documentation generation and created a tool specifically for hardware design and verification engineers. Their Specador Documentation Generator handles source code written in SystemVerilog, Verilog-AMS, VHDL, and the e verification language. As I expected, it can document chip designs, useful both for chip-level products and for larger systems whose functionality is largely determined by the chips within. Specador also handles the verification constructs of e and SystemVerilog, so it can document testbenches as well. This is clearly valuable for IP providers who offer the verification environment as a product, but it is also useful to document testbenches for internal design blocks. New team members or future projects that reuse the code will benefit from high-quality documentation.

I have to say that Specador produces professional-looking results, with different fonts and styles, hyperlinks, and even generated diagrams. I asked Cristian to share some of the secret sauce of how this all works. As with the other AMIQ EDA products, the fact that they compile the source code is the key to their understanding of it. Some documentation generators just skim the code looking for pattern matches. Specador compiles the code, so it identifies elements such as functions, modules, classes, and ports without the need for special comments or pragmas. This saves a lot of time for the engineers and makes it possible to get reasonable documents with minimal setup effort.

Even if the code is poorly commented, the generated documentation is accurate and detailed.  It shows design and verification hierarchies, port and function tables, and more, all hyperlinked to simplify navigation. The documentation is also linked with the corresponding source code and remains up to date as the source code evolves. More advanced users can define “comment processors” that extract additional semantic information from comments.

As we have noted before, SystemVerilog is a particularly complex language, with many extensions to both traditional RTL syntax and software programming languages. Specador includes support for all the powerful features of SystemVerilog, including coverage, assertions, and object-oriented programming (OOP). This leads to deep knowledge of the design and testbench, enabling better documentation with specialized diagrams for block schematics, state machines, instance trees, inheritance trees, and more. These diagrams are smart; for example, a user can click on a class in an inheritance diagram and jump to the point in the manual where that class is documented or step down/up into the design hierarchy.

As with software documentation generators, Specador provides value throughout the project, and beyond. Whenever the code is changed to add functionality or fix a bug, documentation is simply re-generated. Design, testbench, and documents remain in sync. I’ll say the same thing about Specador that I did for DVT Eclipse IDE: it’s hard to imagine any team designing or verifying RTL without this product in their toolbox. Please join me in thanking Cristian for his insight and for developing tools useful every day on real-world hardware projects.

To learn more, visit https://dvteclipse.com/products/specador-documentation-generator.

Also Read

An Important Next Step for Portable Stimulus Adoption

With Great Power Comes Great Visuality

Renaming and Refactoring in HDL Code


More Actel Foundry Woes: Andy Grove and Intel
by John East on 09-16-2019 at 6:00 am

The foundry problem continued to plague us at Actel.  We had a really complex process! But — we needed state of the art feature sizes if we were to compete with Xilinx.  TI and Matsushita had been doing a good job for us, but not in fabs with state of the art technology.  We were two process generations behind! At two generations behind, we had no chance to compete in density.  Xilinx flat out had bigger FPGAs than we did.  Competing in cost and speed was no picnic either. How could we get someone with a state of the art fab to agree to make wafers for us?  One day Bill Davidow and I were brainstorming. Bill said, “Hey, why don’t you meet with Intel and offer them a deal. Offer to give them rights to use your programmable technology in exchange for them giving you foundry services out of their best fab.  That way you’ll not only gain a secure foundry partner, but the process you get access to will be the best in the world.”

Intel was the technology leader at the time.  Bill was right.  That deal would have given us access to the world’s most advanced technology.  A huge win for us!!!  But I was skeptical.  It wasn’t clear to me that Intel would want the rights to our antifuse technology.  Using our custom process with its extra masking,  implant,  and deposition steps (not to mention the high voltage requirements) would raise the cost of their wafers and hence the cost of their microprocessors.  Still it was worth a try.  Bill knew Andy Grove (the Intel CEO) well.  They had worked together for many years.  It was easy for Bill to get me a lunch meeting with Andy.

I called Andy and asked him where he’d like to have lunch.  He said, “Right here in the Intel cafeteria.”  I asked him why he wouldn’t rather go to a nice restaurant.  His answer:  “There’s a parking problem at Intel.  Not enough parking spots.  If I take my car out of the lot to go to lunch, I won’t be able to park when I get back.”  I asked him if he had a “reserved for the CEO” parking spot.  He told me, “no.”  He told me that the parking protocol was, “The earliest arrivers get the best parking places.”  So — if he wanted a good parking spot, the only way to get it was to go to work early and then not move the car at lunch.  I liked that system.  It’s the same one we used at Actel.  I told him that I would be happy to meet him at his cafeteria.

Andy Grove was a very, very down to earth guy.  When I met him in the Intel headquarters building everyone around him was looking good in their suits, white shirts and silk ties. When Andy came down the stairs though, he was wearing jeans and an old, ugly sweater. We went into the Intel cafeteria. Then we waited in line with everyone else. He paid for both our meals. Then we found a table that was a little isolated.  (There was a circle of mostly empty tables around us.  It was a little like the Korean DMZ.  Nobody wanted to get too near to Andy.)

Intel had the best fab technology. I wanted to be able to use it! The hard part was figuring out exactly what we could give Intel in return that they would value but wouldn’t put them in direct competition with us. I thought I had it figured out. I had prepared a thick binder full of the details of my proposal and all the benefits that would accrue to Intel if they took us up on our offer. I was proud of my work!!!  When we sat down I pulled out the binder and started to open it.

But … before we got down to business Andy wanted to talk about AMD. AMD and Intel had gone through some very rough legal battles over rights to the Intel processors. AMD maintained that they held certain rights to those processors due to an agreement that the companies signed in 1982. Intel maintained that the agreement was inoperative because AMD hadn’t held up their end of the bargain. The legal battle had been very bitter and was, in fact, one of the reasons that I left AMD. Andy pretty much hated AMD and everyone who had ever worked there.  (But maybe not quite as much as some of the AMD people hated him — it had been a very, very bitter fight). Before I left AMD, I had been running AMD’s microprocessor division — the group that Andy hated most.  Andy was known to have a quick temper and to be extremely confrontational.  So — yes!  I was nervous! (Here’s a good place to insert a joke about a long-tailed cat in a room full of rocking chairs.)

Before we talked about foundry, Andy wanted to get my views on what the AMD people really thought about what had gone on —  What I thought about Jerry (Sanders) —  What I thought about Tony (Holbrook)  — What they thought about the battle.  He quizzed me at some length. I had left AMD several years before I met Andy, so was mostly able to get away with pleading ignorance. (Of course,  I wasn’t really ignorant.  I knew exactly what they thought.  It wasn’t pretty!!!)  Finally he apologized for taking time away from my meeting purpose and asked me what I had in mind.  I whipped out my massive binder, turned to page one, and started to take Andy through it. He stopped me. He reached over and closed the binder.

Andy: “John. In 25 words or less, what is it you want from Intel?”

Me: “Fab capacity on your advanced line.”

Andy: “John. You’re a good guy. I like you. So I’m going to offer you a choice.”

Me: “Great. What’s that, Andy?”

Andy: “I have a large staff of MBAs who came from really impressive schools. They work on these kinds of proposals for us. They’re top notch. If you’d like me to, I’ll give them your proposal and ask them to study it thoroughly and provide a well-reasoned, written response. That will probably take them a month or so. I’m quite sure their answer will be no. The other option is that I can tell you no right now and save you from having to wait a month. Which way would you like to go on this?”

There was no beating around the bush when you were dealing with Andy Grove!

Next week:  going public

Pictured:  Andy Grove

See the entire John East series HERE.



Radio: Relevant, Unifying, Intimate, Vulnerable
by Roger C. Lanctot on 09-15-2019 at 10:00 am

It was hard to escape the notion while attending RadiodaysAsia2019 that the world is experiencing what can only be called “peak radio.” Radio has the widest audience reach of any content delivery medium anywhere in the world, with the possible exception of India, according to researchers such as Nielsen Media, GfK, Rajar, Edison Research and others. And radio is the most trusted medium, according to research from the European Broadcasting Union.

Digital radio (in the form of HD Radio, DAB and DAB+) is unfurling across Europe, North America and the rest of the world, broadening and enriching the trove of content emanating from broadcasters. The shift to digital is further transforming the medium, adding visual elements and information services while enhancing the quality of the signals.

This increasingly digital and visual medium is also now searchable and manageable thanks to metadata which is being implemented station by station – while receivers are steadily being updated and upgraded. In fact, the Internet has allowed radio to reach into every nook and cranny of listeners’ lives  even as standalone radios have begun to disappear.

Radio Futurologist James Cridland, keynoting the event, noted further that radio creates communities and unifies listeners rather than dividing and inflaming audiences the way social media like Facebook, Twitter, and YouTube have done, perverting elections in the U.S., Europe, India and elsewhere. And, Cridland added, radio won’t violate your privacy.

Radio also won’t promote fake news. Researchers Lucile Stengel and Sapna Solanki of the BBC shared the results of a study published late last year on the impact of social media and fake news on elections in India. Suffice it to say, radio was not implicated.  The study can be found here: https://downloads.bbc.co.uk/mediacentre/duty-identity-credibility.pdf

Why, then, if radio appears to be riding a wave of media domination, is there an eerie sense of competition closing in on the industry? It so happens that the Internet giveth and the Internet taketh away.

With the boost in audience reach that the Internet has enabled for radio (creating new ways to connect and interact with listeners) has come competition for the ears of listeners from music streaming services, podcasts, and YouTube. The latest rumor is that Facebook is poised to enter the streaming business as well.

The most feared competitive phalanx is FAANG – so-named by Julie Warner, Events Director for Commercial Radio Australia – Facebook, Apple, Amazon, Netflix, and Google. These organizations are targeting the ears, eyeballs, and wallets of radio listeners and putting pressure on broadcasters with billions of dollars in advertising at risk.

Nowhere is the onset of FAANG more notable than in the automobile, where as much as 50% of all radio listening, or maybe more, occurs throughout the world. Amazon, Apple, and Google, in particular, are seeking to close the gap that the automobile creates in the otherwise comprehensive view they possess of consumer behavior in the form of search, transactions, and daily interactions.

The car is a browser on wheels operating largely off of the grid from normal search and information resources. The car is a hole in the broader search marketplace – worth $100B to Google. It is a vacuum and FAANG abhors this vacuum.

Car companies are seeking to capitalize on their privileged position as controllers of all that occurs in the car. The radio industry, too, is hoping to preserve its prime real estate perched in the center of the dashboard.

Toward that end, Michael Hill, Managing Director of Radioplayer Worldwide UK, spoke at RadiodaysAsia about the importance of adopting and deploying digital assets to enable rich in-vehicle digital experiences that are radio centric. In that vein, Radioplayer has introduced a reference design capable of integrating streaming services with terrestrial radio along with helpful visual elements designed to break radio out of the bonds of ancient “radio dial” interfaces into a more non-linear experience.

Companies like Zenon Media and Sheridan Broadcasting are taking radio further, offering tools to integrate video content with radio’s audio content for both Internet-based delivery and in-vehicle rendering – perhaps in the front seat? Anything is possible – especially now that Tesla Motors is tipping its intention to introduce Netflix and YouTube in dashboards.

The world is experiencing peak radio, as evidenced by research and insights shared at RadiodaysAsia2019. But preserving this leadership will take innovation and collaboration – particularly with the automotive industry. Unlike previous Radiodays events, RadiodaysAsia was missing a stronger representation of the automotive industry. Auto makers must tune in to the changes sweeping the broadcast industry.

Radio already delivers a location-relevant platform including news, weather, sports, traffic, and advertising with curation that creates an intimate experience for listeners throughout the world. The onus is on car makers to bring that experience to life in the latest digital dashboards. Today’s radio is digital, searchable, non-linear, intimate, community-creating, and personal. There’s nothing else quite like radio and it’s better than ever.


WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®
by Randy Smith on 09-13-2019 at 10:00 am

I recently wrote about a ClioSoft® study with Google on using cloud platforms for EDA design and the importance of using persistent storage when doing that. ClioSoft will again be sharing important information on design productivity in the upcoming webinar, Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®. SemiWiki will hold this webinar featuring ClioSoft on Tuesday, September 24, 2019, from 10:00 am to 10:45 am. You can reserve your space here with your work email address.

Over the past decade, design reuse has become not simply more important, but a mandatory requirement. You cannot design today’s chips, with their tremendous transistor counts, from scratch. There is not enough time. The lack of time to design from scratch is even more clear when we look at analog design. If you have a piece of analog functionality already working in a certain process, you should never redesign it without a very good reason; otherwise, you are simply wasting time and resources. It is not free to move analog IP between process nodes, but it is usually better than “reinventing the wheel.” So, if you are going to look for reusable IP inside your company, where do you start?

ClioSoft announced designHUB® in May 2017. It made its debut at DAC a month later. Since then, many companies have adopted designHUB to reuse their internal and licensed third-party IPs. With ClioSoft’s integration with so many EDA vendors, the adoption of designHUB has been increasing dramatically. This webinar will focus on using designHUB for analog design with Cadence’s Virtuoso.

Sourcing IP can be a time-consuming part of the design process. There is a point at which, if you have not selected the IP you are going to use for a specific function, the design process halts. When selecting IP, there are many things to consider:

  • Basic functionality – What does this IP do?
  • Specifications – In which process was it used? How fast is it? How big is it? How much power does it consume?
  • Features – Does this IP support all the features you need?
  • Usage – How often has it been used? In what types of designs?
  • Cost – Especially in the case of a previously licensed IP, it may not be free
  • Quality – You need to know the previous usage of the IP, and how well it has performed
  • Support – Is there an in-house expert who has used this product that can answer your questions? How do you learn about fixes and workarounds?

Managing and sharing all this information across your company is what designHUB provides. You enable your design community to crowdsource their IP using search mechanisms. You can also set up workflows for addressing the permissions necessary to utilize a given IP. Each design team may now more easily share its IP with the rest of the company’s design community, improving design quality and reducing design costs.

The webinar will be moderated by Dan Nenni, Founder of SemiWiki. The presenter will be Karim Khalfan, Vice President of the Application Engineering group at ClioSoft. Karim has led the deployment of ClioSoft’s SOS7 design data and IP management across the semiconductor industry. He has written several articles and white papers on SoC design data management solutions. I have known Karim for more than a dozen years; he is a friendly, funny, and smart person, and you will enjoy hearing him speak. Karim received his Bachelor of Science degree in Computer Science from the University of Texas and holds a patent on defining a universal data management adapter to be used for integration with any EDA tool.

Be sure to sign up using your corporate email address now – here.

About ClioSoft Inc.

ClioSoft® is the pioneer and leading developer of enterprise system-on-chip (SoC) design configuration and IP-management solutions for the semiconductor industry. The company provides two unique platforms that enable SoC/IP design-management and reuse.

The SOS7 platform is the only design-management solution for multi-site design collaboration for all types of designs – analog, digital, RF and mixed-signal. The designHUB® platform provides a collaborative IP reuse ecosystem for enterprises.

ClioSoft customers include the top 20 semiconductor companies worldwide. The company is headquartered in Fremont, CA with sales offices and distributors in the United States, United Kingdom, Europe, Israel, India, China, Taiwan, South Korea and Japan. For more information visit www.cliosoft.com

Also Read

For EDA Users: The Cloud Should Not Be Just a Compute Farm

IP Provider Vidatronic Embraces the ClioSoft Design Management Platform

56thDAC ClioSoft Excitement


Chapter Ten – Design Automation for Systems
by Wally Rhines on 09-13-2019 at 6:00 am

Electronic design automation has evolved to the extent that complex chips with tens of billions of transistors frequently produce first-pass functional prototypes from the manufacturer.  What makes this so incredible is that such a small portion of the possible states of electronic operation are actually tested in the simulation of the chip.  Figure 1 takes the example of a very simple electronic function, a 32-bit comparator, which compares two 32-bit numbers and determines whether one of them is equal to, less than or greater than the other.  One might naively assume that this requires comparing every possible pair of inputs: 2^32 values for each operand, or 2^64 pairs in all.  It doesn’t.  If it did, then a caveman who was given one of today’s state of the art computer servers 565,000 years ago would just have completed the calculation.  EDA history is made up of innovations that preempt the need to check every possible state of an electronic circuit, or 100% of the state space as design practitioners would say.

Figure 1.  Simple comparison of two 32-bit numbers would require 565,000 years with a state-of-the-art computer if each possible pair of numbers had to be compared
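
A quick back-of-envelope check of the 565,000-year figure, assuming a simulator that can evaluate on the order of a million input pairs per second (an illustrative rate; the text does not state one):

```python
# Back-of-envelope check of the "caveman" figure above.
pairs = 2 ** 64                       # every combination of two 32-bit operands
rate = 1.0e6                          # assumed pairs simulated per second (illustrative)
seconds_per_year = 60 * 60 * 24 * 365.25

years = pairs / rate / seconds_per_year
print(f"{years:,.0f} years")          # roughly 585,000 years -- the same order as 565,000
```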

The question then arises, “if we can reliably simulate the behavior of chips with billions of transistors, can we extend the technology to more complex systems like cars, planes and trains?”  Or, if we can do this for the electronic behavior of a chip, could we extend it to the mechanical, thermal, aerodynamic or other simulated behavior of a complex system? Inverse reasoning suggests that the answer is “yes”.  The reason is that the electronics of systems like cars and planes are becoming so complex that, if we can’t automate the design and simulation, there is no other known solution.  Humans certainly can’t analyze the complexity of such a system (Figure 2).

Figure 2.  Electronic and wiring complexity of a 2014 S-Class Mercedes

It has taken sixty years to evolve the software to accurately simulate the electrical behavior of chips.  How long will it be before we can do the same for an entire car or plane?  And how will cars and planes be designed in the meantime?

For the automotive and aerospace industries, mechanical design simulation and verification evolved long before electronic simulation.  Dozens of mechanical computer-aided design, or CAD, companies emerged in the last thirty years. Today, simulators that model most of the mechanical design, as well as the manufacturing processes to produce it, are available from companies like Siemens, Dassault and Parametric Technologies.  These simulators also analyze aerodynamics and thermal effects.

It’s just in the last three decades that the electronics in cars and planes have increased in complexity to such a level that humans can no longer manage the data required to create an optimized, cost effective design without errors (not to mention protections against hacking).

It’s easy to assume that the design of a car you buy has been verified by driving prototype cars for thousands of miles in all types of weather conditions.  It probably has.  Before a manufacturer can build that prototype, extensive verification must be performed.  How is that done?  It all comes down to a methodology called “abstraction”.  Requirements for the design of a vehicle are described at a high level and then refined to provide greater detail.  Each level of abstraction of the data is analyzed on a computer or with a physical prototype of a subsystem.

The same is true of integrated circuits.  Figure 3 shows the various abstractions used to describe, simulate and verify the performance of a chip.

Figure 3.  Four “abstractions” used in the design of integrated circuits

Although the practice is relatively new, ICs are increasingly being described in a high-level language like C++.  This description is relatively compact, so simulations of the entire chip, or the critical performance portions of it, can be run quickly.  That description is automatically “synthesized” into the next level of abstraction called “RTL,” or register transfer level, which is described by a language such as Verilog, VHDL or SystemVerilog.  This level of abstraction is much more detailed, describing the logical operations of the chip.  Simulations of the full chip typically take up to twenty-four hours, so the building blocks of the chip are rigorously simulated before integrating them step by step until the whole chip can be simulated.  Once the designer is satisfied with the RTL simulation, the database is synthesized into a description of the actual logic gates, creating what is called a “netlist”.  The design is then synthesized into a description of the physical layout of the transistors on the silicon and transformed into a language (GDS2) that the photomask generator can understand and convert into the actual photographic negative that is used to manufacture the chip.

System design has evolved a similar design approach but systems engineers refer to it as the “V Diagram” (Figure 4). A difference between the “V” approach and that used

Figure 4.  System “V” diagram showing the path from high level abstraction to greater detail followed by integration and verification at each level of abstraction

by IC designers is that the system designer is likely to build a physical prototype of each subsystem once the design is refined to the level of a physical description.  That prototype can then be tested by inserting it into a laboratory mockup of the entire vehicle using what is referred to as “hardware in the loop” testing. Integration testing can also be performed with hardware in the loop but increasingly those subsystems are tested in a “virtual” environment where the parts of the vehicle that are connected to it provide inputs and react to its outputs in a simulated virtual environment.

This whole methodology is being disrupted because of growing complexity.  Once we begin to develop truly autonomous vehicles, the approach will become totally inadequate because the number of tests that must be performed exceeds the capability of physical testing (Figure 5).  To test an autonomous vehicle would require more than eight billion miles of driving, according to Akio Toyoda, CEO of Toyota.

Figure 5.  More than 8 billion miles of driving would be required to physically test an autonomous vehicle.  Instead “virtual” verification must be adopted

A manufacturer would have to send out a fleet of 300 cars, driving at 60 mph for fifty years.  Not very practical for introducing a new model each year.
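
The fleet arithmetic is easy to check (a quick sketch assuming round-the-clock driving, which the eight-billion-mile figure implies):

```python
# Quick check of the fleet-size arithmetic: miles accumulated by 300 cars
# driving non-stop at 60 mph for 50 years.
cars, mph, years = 300, 60, 50
hours = years * 365.25 * 24
total_miles = cars * mph * hours
print(f"{total_miles / 1e9:.1f} billion miles")   # roughly 7.9 billion, i.e. ~8 billion
```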

Another reason that automotive and aerospace design must become virtual is that optimization has become too complex.  Consider the wiring alone.  With more than 1.5 miles of wiring in a car, forty miles in a small business jet, and over one hundred miles in a commercial aircraft, there is a critical need to analyze tradeoffs among variables like weight, cost, performance, signal integrity, etc.  Finding an optimum combination is far beyond the ability of the human brain. The same can be said for optimization of the electrical subsystems, called electronic control units or ECUs in a car, or line replaceable units or LRUs in an airplane.  These ECUs contain multiple chips and embedded software to handle processing such as control of brakes, transmission or engine ignition.  They are complex enough to require simulation to assure that the inputs and outputs perform as specified.  Additional opportunities for problems arise when the ECUs are tested in a system environment.  Even if an automotive OEM were lucky enough to produce a functioning car without a virtual simulation, debug of future problems would be difficult or impossible without a simulation.

Modern cars contain up to one hundred million lines of software code.  It’s safe to assume that this code will contain bugs.  The challenge for the automotive OEM is to find a way to react quickly and update the software in every similar vehicle on the road when a bug is discovered.  Otherwise, the OEM could be liable for all accidents that occur once the bug is known.  Tesla has developed an infrastructure to make this possible.  The other challenge is to design the car in such a way that mission critical systems can be isolated.  Many of the most publicized hacks of vehicles have come from intrusion of the vehicle through the infotainment system that is tied to the CAN bus, giving access to more critical systems like the brakes, transmission and engine.

How long will it take until automotive OEMs design the entire vehicle, as well as the assembly line for building it, in a totally virtual environment on a computer?  The industry is farther along than you might think.  Most of the mechanical design and manufacturing operations are already done that way.  The remaining challenges include much of the electronics.  That’s why Siemens, which provided software for all aspects of mechanical, aerodynamic, thermal and manufacturing simulation, decided to acquire an EDA company, Mentor Graphics.

System simulation of the electronics, as well as testing and optimization that involves “cross domain” testing among electrical and mechanical systems, remains very challenging.  Wiring architectural tradeoffs and automatic generation of the design of the wire harness are essentially automated today.  Automation of design and verification of other vehicle electronics will require development of abstractions that can be used to analyze multiple ECUs operating in concert with one another as embedded software is executed in the vehicle.  The abstractions must be at a high enough level that they can be simulated at something like 100X real-time execution but detailed enough that an engineer can analyze the inner workings of an ECU to find a design bug or test an optimization alternative.

How long will this take?  Not that long.  It has to happen over the next decade or two or we won’t be able to design the next generation of cars and planes.


Synopsys is First IP Provider with a Complete CXL Implementation Available
by Randy Smith on 09-11-2019 at 12:00 pm

Synopsys just announced the availability of their IP solution supporting CXL (Compute Express Link). This new protocol is going to be an important component for several applications expected to be shipping starting in 2021. CXL is an alternate protocol that runs on the same physical layer as PCI Express (PCIe). Among other usages, PCIe is the protocol running over the expansion slots on all PCs. Other standards have been written on top of the PCIe electrical interface including the laptop expansion card interface ‘ExpressCard’ and the serial computer storage interface SATA Express. In data centers, many applications have become based on special hardware plugged into PCs via the PCIe slots on the motherboards. Those specialized applications to some extent have been held back by the limitations in the PCIe protocol. CXL is the new standard to address the needs of these new applications while maintaining backward compatibility with PCIe.

We have all heard of the explosion in machine learning and artificial intelligence. These solutions are predominantly based on either GPU or FPGA accelerators. There will soon be an onslaught of cards with application-specific processors from any number of different processor architectures to support applications such as image and facial recognition, encryption/decryption, various video processing functions, storage class memory, voice recognition, big data analytics, and other capabilities that all depend on a fast host connection while running in the PCIe slots. With so much intelligence available in the expansion cards, more was needed from the interface protocol – specifically the sharing of cache and memory data between the host processor and the accelerator card’s processors. CXL addresses this for several types of systems by supporting low latency and cache coherency.

The most significant feature of CXL is that it uses three unique protocols – CXL.io which is used for configuration and data management, CXL.cache which enables an attached device to cache data from the host’s memory, and CXL.mem which allows a host processor to access attached memory in a CXL device using standardized transactions. These protocols allow the attached accelerators to work more cleverly and efficiently with the host processor, and potentially through the host processor cache, with other attached accelerators. Keep in mind that PCIe is a point-to-point connection model, not a bus model. Each of the attached devices has a dedicated channel to the host. The host processor manages coherency of data cached by the attached devices.

So why do I think that this will be important in 2021? Easy, future Intel CPUs will support PCIe 5.0 and CXL, beginning in 2021. In March, we heard that “Intel sees CXL as being an alternate protocol running over the PCIe physical layer. At first, CXL will use 32Gbps PCIe Gen5 pipes, but Intel and the consortium plan to aggressively drive towards PCIe Gen6 (and theoretically beyond) to scale.” In July, AMD also signed on to CXL. While AMD is also in other potentially competing consortiums, Intel is only backing CXL for the protocol in this part of the computing architecture. There are also many other prominent companies backing this standard. By market strength alone, I would expect it to win, but beyond that, it seems like a very efficient approach as well.

Synopsys has announced a quite complete CXL solution. The DesignWare® Compute Express Link (CXL) IP solution consists of a controller, PHY, and verification IP. Synopsys’ CXL IP solution is compliant with the CXL 1.1 specification and supports all three CXL protocols (CXL.io, CXL.cache, CXL.mem) and device types to meet specific application requirements. And, of course, CXL IP is built on Synopsys’ DesignWare IP for PCI Express 5.0. Most importantly, you can license and start designing with this solution now for products shipping in 2021, when we expect Intel to be shipping systems supporting CXL as well.