
Computing with Light

by Daniel Nenni on 12-27-2019 at 6:00 am

Evolution of programmable photonics

I recently wrote about this year’s Cadence Photonics Summit. As I mentioned in that post, it was a fascinating event with several companies providing useful and informative presentations. You can access some of the presentations on the event site. One presentation, given by Jose Capmany of iPronics, was especially interesting to me, so I will dive into it a bit.


The current commercial efforts to utilize photonics have typically focused (pun intended) on data transmission. Most of these efforts utilize the same pulse-amplitude modulation with four levels (PAM4) technology that is used in high-speed (e.g., SerDes) copper data transmission. But the field is growing much faster than this as more optical circuits become available. These functions—filters, delay lines, RF phase shifters, switches, MUXs, beamformers, arbitrary waveform generators, and optoelectronic oscillators—are enabling a new class of photonics: RF/mm-wave photonics. One interesting circuit discussed in the presentation was the European Research Council's ERC ADG 2016 UMWP Chip Project. As stated in the presentation, "The main objective of UMWP CHIP is the design, implementation and validation of a universal integrated microwave photonics programmable signal processor capable of performing the most important MWP functionalities featuring unique broadband and performance metrics." This is a serious piece of engineering work.
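For readers unfamiliar with PAM4, here is a minimal sketch of the idea (my illustration, not from the presentation): two bits are mapped onto each of four amplitude levels, doubling the bit rate of NRZ at the same symbol rate. The Gray-coded mapping below is a common convention; exact level definitions vary by standard.

```python
# PAM4 packs two bits into each symbol using four amplitude levels.
# Gray coding means adjacent levels differ by one bit, limiting bit
# errors when noise pushes a symbol into a neighboring level.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0]))  # [-3, 1, 3]: three symbols carry six bits
```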

Jose's presentation took a bit of a diversion here, and at first I did not understand where he was headed. He reviewed the history and evolution of the field-programmable gate array (FPGA). But then he went on to describe a new form of gate array, the field-programmable photonic gate array (FPPGA). The fabric in an FPPGA is not populated with look-up tables (LUTs). Instead, it consists of reversible 2×2 unitary gates. These gates operate on analog signals and follow the algebra of 2×2 unitary matrices, U(2). Reversible gates are built by transforming the Pauli matrices, which are well known in quantum information and quantum computing (QC)!
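To make the fabric concrete, here is a small NumPy sketch (my own illustration, not from Jose's slides) of a tunable Mach-Zehnder interferometer (MZI), the standard physical realization of a programmable 2×2 unitary gate. The transfer-matrix convention used here is one of several in the literature:

```python
import numpy as np

# A tunable MZI implements a programmable element of U(2): the internal
# phase theta sets the power splitting, the external phase phi sets the
# relative output phase. Other (equivalent) phase conventions exist.
def mzi_unitary(theta: float, phi: float) -> np.ndarray:
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * s, np.exp(1j * phi) * c],
        [c, -s],
    ])

U = mzi_unitary(0.7, 1.3)
# Unitarity (U times its conjugate transpose equals the identity) is what
# makes the gate lossless and reversible:
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```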

FPPGA Basics

Some call QC the study of a non-classical model of computation. That, to me, is an oversimplification. It deals with functions that transform states, rather than with binary math and traditional logical operators.
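A one-line illustration of "functions that transform states" (my example, not from the presentation): applying the Pauli-X gate, the quantum analogue of NOT, to a state vector. Applying it twice recovers the original state, which is the reversibility mentioned above.

```python
import numpy as np

# Computation in the state-transformation picture: a unitary matrix acts
# on a state vector, instead of a Boolean operator acting on bits.
ket0 = np.array([1, 0], dtype=complex)         # the basis state |0>
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X, the quantum NOT

ket1 = X @ ket0   # |0> is mapped to |1>
back = X @ ket1   # applying X again recovers |0>: the gate is reversible
print(np.allclose(back, ket0))  # True
```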

If you attended SEMI this summer, you had a chance to see the IBM Quantum Computer that was on display there. It looks like an exotic piece of hardware. But how do you program it? It has its challenges, as Bernard Murphy pointed out in a blog early this year, Contrarian Views on Quantum Computing. But QC holds great promise in cryptography, simulation, simulated annealing, solving linear equations, and more. So QC is not going away. It will just have to chip away at digital logic design, algorithm by algorithm, and it will take a long time before it becomes widely adopted. However, it is very likely to capture some niche markets very quickly, given its higher efficiencies for certain problems.

Cadence, in its collaboration with Lumerical, has delivered the photonics design tools to support the implementation of photonic ICs (PICs) now. No need to wait for that. I hope to see the day when Cadence also produces a QC simulator, or maybe Lumerical will; we will just have to see.


Cryptocurrency Exchange Hacks are on the Rise

by Matthew Rosenquist on 12-26-2019 at 10:00 am

Seven major cryptocurrency exchanges were victimized in 2019, totaling over $160 million in financial theft. As predicted, cybercriminal hackers targeted crypto exchanges in 2019 and the trend will continue into 2020.

Crypto exchanges are relatively new compared to those in the traditional financial markets. The sector is a hotbed of competition, which drives innovation and is attractive to criminals. Over 400 cryptocurrency exchanges exist, all vying for a piece of the growing $200+ billion market. New features and updates are constantly modifying the software and technology infrastructure. Over six thousand unique digital coin and token assets exist, and the scope of management complexity continues to grow for these online markets. With constant change, vulnerabilities are inadvertently introduced.

Many of the exchanges have not matured, from a cybersecurity perspective, to the point of properly validating, maintaining, and defending their online services. Most of the sites focus on maintaining services and growing the user base, with little attention to security. The race to establish themselves and be competitive has kept them from investing in the necessary cybersecurity controls. In comparison, the brick-and-mortar banking sector is well versed in the risks of cyber-attacks. With decades of experience, banks spend considerably more than other industries on security.

Wherever there is value, the risk of theft exists. Digital tokens and coins are different from dollars and other government-issued currencies, but they have value and can be converted into just about any form of money on the planet, which makes them a desirable target.

Additionally, the risks of being caught are small. Crypto assets can be easily stored, hidden, transferred, and laundered. Law enforcement’s effectiveness is less than optimal and not a significant deterrent. Their tools lack refinement, international cooperation is weak, and cybercrime laws are poorly defined. Investigation and recovery of crypto assets are problematic at best, which increases the lure to attackers. Improvements and new capabilities for pursuing criminals in the digital landscape are being made, but progress is slow.

The combination of significant wealth, online accessibility, numerous vulnerabilities, and a plausible exit strategy for stolen assets makes for attractive targets. The result is that cybercriminals are beginning to explore and invest in targeting cryptocurrency exchanges, where vast amounts are consolidated in one place. The payoffs have been staggering, with some hacks netting the digital thieves over $40 million.

  1. Upbit       $49M   November 26th
  2. Bitpoint    $32M   July 12th
  3. Bitrue      $4M    June 27th
  4. Binance     $40M   May 7th
  5. DragonEx    $7M    March 24th
  6. Bithumb     $13M   March 30th
  7. Cryptopia   $16M   January 15th
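As a quick sanity check on the figures above (amounts in $M as listed):

```python
# Summing the seven reported hacks gives roughly $161M, consistent with
# the "over $160 million" total cited for 2019.
hacks = {"Upbit": 49, "Bitpoint": 32, "Bitrue": 4, "Binance": 40,
         "DragonEx": 7, "Bithumb": 13, "Cryptopia": 16}
print(sum(hacks.values()))  # 161 ($M)
```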

The successful heists embolden and encourage more attempts to target this industry. Until cybersecurity measures rise to match the threats, attacks will continue to increase and a wider range of targets will fall victim. It is a self-reinforcing cycle.

I predict 2020 will see even greater numbers of attacks on, and losses to, cryptocurrency exchanges, product vendors, service providers, and the holdings sector. Cybercriminals will find new ways to exploit, defraud, and steal from the cryptocurrency ecosystem at a scale never seen before. This trend is here to stay for the foreseeable future.


AAA: Killer Automotive Safety Systems

by Roger C. Lanctot on 12-26-2019 at 6:00 am

AAA is out with a new study, conducted on its behalf by the Virginia Tech Transportation Institute, that purports to show, among other things, that advanced automotive safety systems may lull drivers into a false sense of security leading to distracted driving or worse. The takeaway from this impressively elaborate study is that car makers should take greater care and responsibility in deploying these systems and training dealers and drivers.

Understanding the Impact of Technology: Do Advanced Driver Assistance and Semi-Automated Vehicle Systems Lead to Improper Driving Behavior? – VTTI/AAA

AAA has increasingly built its auto safety brand on widening the awareness and mitigating the impact of distracted driving – estimated by the USDOT to take upwards of 3,000 lives on U.S. highways annually. For years, AAA led the charge against smartphone use while driving, going so far as to assert that even hands-free smartphone use was hazardous and should be sanctioned – based on studies of the cognitive load on drivers.

If you are starting to get the impression that the AAA is the last organization you want in the backseat of your car on your next long trip, then you are in tune with my sentiments. At the very moment that the automotive industry is being transformed by active safety systems designed to avoid collisions, keep drivers in their lanes, or alert drivers to objects in their blind spots – AAA is sounding the alarm that drivers may be becoming over-reliant on these systems and taking their eyes off the road.

This AAA position is perfectly aligned with that of the Insurance Institute for Highway Safety (IIHS), which for years claimed that blindspot detection systems, lane keeping assistants, and automatic emergency braking solutions were failing to reduce claims rates because consumers were turning them off. So you are damned if you do (turn them on), according to the AAA, and damned if you don't, according to the IIHS.

More importantly, these two great advocates of driving safety were speaking out against the proliferation of safety systems rather than embracing them and describing how they might be enhanced.

Car companies such as Subaru, Nissan, Ford, Volkswagen, BMW, Volvo, and Hyundai that have taken the lead in deploying safety systems across their vehicle line ups and leveraging safety in their branding ought to be recognized, praised, and rewarded for having done so. The reality is that these active safety systems are being deployed by auto makers in the absence of regulatory mandates and in recognition of the fact that consumers highly value safety in their vehicles and are willing to pay for it.

In fact, consumers are so willing to pay more for safety systems in their cars that most have ignored the fact that the majority of auto insurers fail to provide insurance incentives for adding these systems – with some exceptions. Kudos to USAA.

The fact of the matter is that cars should not hit things! Car crashes are a product flaw and any technology designed to prevent crashes – such as lane keeping, blindspot detection, and automatic emergency braking – should be on a path to universal industry adoption.

The AAA/VTTI study makes note of a variety of valuable insights ranging from the differing levels of driving capability of study participants to the varying behaviors reflected in the process of developing familiarity with new safety systems. The study also identifies the various challenges associated with different types of user experiences and interfaces for indicating when systems are turned on or off and when and how alerts are communicated.

Of course, the study is based on cars currently in the market, meaning the results of the study are nearly useless or irrelevant in the context of constantly evolving automotive safety systems. Some of the systems on the road today have driver information displays that are either too small or hidden behind the steering wheel, or may lack audible cues to go with visible indicators. And there is an unfortunate lack of consistency between car brands.

(It’s worth noting the growing adoption of driver monitoring systems globally – including the recently proposed Euro NCAP percent eye closure standard – intended to ensure future driver attentiveness.)

The revolution of active auto safety systems washing over the automotive industry is arriving in the form of increasingly inexpensive camera- and radar-based systems capable of identifying roadway obstacles and anticipating their movements. Ever more powerful on-board processing technology is allowing safety systems to deliver the kind of collision avoidance capability consumers should expect.

Over the past 10 years automotive safety advocates from the National Highway Traffic Safety Administration to AAA have taken to blaming drivers for 94% or more of all vehicle crashes. For them, it’s nearly always the fault of the nut behind the wheel.

In the absence of fully functioning automated driving systems to remove the nut from this proposition, I believe it is reasonable to expect auto makers to do their best to enable their products to avoid collisions by leveraging widely available technologies. Studies like the AAA/VTTI project are useful in identifying the scope of the challenge – but pointless for arriving at a solution.

The solution lies in enhanced user interfaces, increased on-board processing capabilities, and the proliferation of vehicle sensors. All of this could be aided by a coordinated effort within the insurance industry to reward drivers that adopt, pay for, and use these systems – and that includes rewarding the auto makers that develop and deliver the systems.

It’s going to be decades before we remove that nut from behind the wheel. It’s time that we, as an industry, did our utmost to help him or her out.


No Coal in This Stocking: VCs and Nuclear Fusion

by Bernard Murphy on 12-25-2019 at 6:00 am

Is fusion energy close?

'Tis the time of year when product pitches are aimed 100% at consumers. No one in their right mind wants to push the nerdy behind-the-scenes stuff we usually talk about. This is a chance for me to go off the rails a little and consider unusual directions in innovation. We know all about VCs underwriting self-driving cars, intelligent everything and anything, clouds, blockchain, and so on ad nauseam, but what about nuclear fusion? Everything we're building these days needs electricity. How about non-polluting (in principle) and unbounded (ditto) power generation? Stack all the unicorns on top of each other and they still can't compete with a value proposition like that.

Fusion involves banging hydrogen nuclei together (there are variants) to create helium nuclei. This process generates net energy and powers stars. Nuclear fission also generates energy, when uranium or plutonium nuclei break up into smaller, energetic fragments. Fission works just fine as a power source but has us all worried about long-lived radioactive by-products and the harm they can cause. Fusion generates no such radioactive by-products (again in principle) and actually generates far more energy per unit mass of fuel than fission. Also, its fuel is hydrogen. We have quite a bit of that around, in water. And fusion generates helium, not carbon. So far at least we have no issues with excess helium (maybe we'll all start talking funny). Seems like a no-brainer.
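A back-of-the-envelope check on fusion's energy advantage, using textbook reaction energies: an individual fission event actually releases more energy than an individual fusion event, but per unit mass of fuel, fusion wins handily.

```python
# Textbook values: D-T fusion releases ~17.6 MeV per reaction over ~5 amu
# of reactants; U-235 fission releases ~200 MeV over ~236 amu (the nucleus
# plus the triggering neutron).
fusion_mev_per_amu = 17.6 / 5.0      # ~3.5 MeV per amu of fuel
fission_mev_per_amu = 200.0 / 236.0  # ~0.85 MeV per amu of fuel
print(fusion_mev_per_amu / fission_mev_per_amu)  # roughly 4x more per unit mass
```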

Except it isn't. We've been working on fusion reactors since the 1940s, without a commercial reactor to show for it yet. You have to bang the nuclei together really hard to overcome electrostatic repulsion. And once they fuse, you have to contain the energy to sustain continued fusion. Containing a plasma at hundreds of millions of degrees is far from trivial. But we're still trying, certainly in universities and national labs. Lockheed and Microsoft Research have even gotten in on the act.

When government-funded research is struggling, maybe it's time to encourage more private involvement. That's the view of a number of VCs, apparently dreaming of trillion-dollar, Aramco-scale IPOs. There's a good article in the Economist on private ventures in fusion: Commonwealth Fusion Systems (a spin-out of MIT), Tokamak Energy (a spin-out of the UK Atomic Energy Authority), General Fusion in Canada, TAE Technologies in California, and First Light Fusion (a spin-out of my alma mater). Between them they have raised close to $1B in funding. Government programs inevitably have richer sponsors; the ITER reactor in Europe is already a $20B program and aims to be fully operational by 2045.

There’s a more detailed article on Commonwealth Fusion Systems, which has a much more aggressive goal – to be operational by 2025 (try getting VCs to underwrite a program that will start to deliver in 2045). They intend to start with a 50MW reactor and scale over time to 200MW, the kind of plant that could take the place of a wind or solar farm.

The CEO makes a good point – renewables will never be able to completely replace carbon-based energy sources. The numbers don’t work to scale to that level. But fusion just might. The risks associated with fusion are similar to those associated with regular industrial plants, nowhere near the risks we associate with fission. The carbon footprint will be tiny, though we have yet to determine if a helium footprint is something we have to worry about.

There’s an old joke that fusion is just 30 years away from reality, and always will be. AI was the butt of similar jokes until quite recently. Perhaps with all this public and private attention, fusion will become a reality sooner than we expected.


Avoiding Fines for Semiconductor IP Leakage

by Daniel Payne on 12-24-2019 at 10:00 am

Percipient IPLM

In my semiconductor and EDA travels I've enjoyed visiting engineers across the USA, Canada, Europe, Japan, Taiwan, and South Korea. I'll never forget one trip to South Korea: I was visiting a semiconductor company, and upon reaching the lobby a security officer asked me to take out my laptop and issue the dir command at the C: prompt so he could write down how many files were on my computer, and the exact number of bytes. After my visit and presentation with the customer, the same security officer checked my laptop again to make certain there were no extra files on it, keeping his facility safe from IP theft by a visiting EDA vendor. That got me thinking about how semiconductor IP is used, shared, and protected, because in the USA we have the "deemed export rule," under which the release of controlled technology or information to a non-U.S. person is considered an export.

Releasing sensitive IP to a non-U.S. person without a deemed export license is a violation, and the fines can cost you dearly, from thousands to millions of dollars. One semiconductor company paid a $10M fine for violating export control laws in 2014.

When I designed chips at Intel we would share IP between design groups in California, Oregon, Japan, and Israel, but all of our IP tracking was done by simple email exchanges, nothing really traceable and certainly not enforceable. So in the industry today we have the issue of IP leakage, which is simply IP that is inadvertently shipped to a country where it is not allowed. Here are four examples of IP leakage to consider:

  1. Access controls where anyone can view and download IP without any enforcement.
  2. Using tar balls to share IP.
  3. An IP block embedded inside other IP, but without any visibility or traceability.
  4. A traveling engineer unaware of IP restrictions in a visited geography.

The IP Lifecycle Management (IPLM) experts at Methodics are on top of this issue of IP leakage and have designed a tool called Percipient that helps engineers prevent accidental IP leakage. The approach with Percipient is a centralized, traceable management system in which an admin sets up permissions at the very start of a project. Three levels of permissions are defined: Read, Write, and Owner.
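As a rough sketch of how a three-level model like this can be enforced (hypothetical names and structure of my own, not Methodics' actual API):

```python
# Hypothetical three-level permission check (Read < Write < Owner).
# The function and dictionary names here are illustrative only.
LEVELS = {"Read": 1, "Write": 2, "Owner": 3}

def can_access(user_level: str, required_level: str) -> bool:
    """A user may act on an IP if their level meets or exceeds the requirement."""
    return LEVELS[user_level] >= LEVELS[required_level]

print(can_access("Write", "Read"))  # True: writers can also read
print(can_access("Read", "Owner"))  # False: blocked
```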

Percipient works with existing infrastructure like the Unix file system and your favorite Data Management (DM) system: Perforce, Subversion, Git, etc. All of your IC design data stays in its native format, and Percipient integrates with the data sources, connecting IP producers to IP users.

With this approach an engineer can quickly build a workspace using a native DM system, and each workspace is traceable and tracked, so no more email messages and manual methods to keep track of everything by hand.

IP is often hierarchical, containing many smaller IP blocks. But if one embedded IP block several levels deep is restricted, how would a user at the top level know about it?

The architecture of Percipient understands and preserves all of your IP hierarchy, so there’s never a chance of accidentally sharing a restricted IP block buried deep inside of any hierarchy.

There is also the challenge of an engineer traveling to a new geography, starting to build a workspace, and inadvertently beginning to use restricted IP. For this, Percipient has a feature called "geo-fencing": a self-managed IP cache in which fencing enforces a "Do Not Download" list for all IPs in the cache. An admin marks each restricted IP block. Here's a diagram of how the "Do Not Download" feature is enforced:


In this methodology a user is blocked from loading any restricted IP for their geography, and the admin can show through traceability that no sensitive IP was accidentally leaked.
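A hypothetical sketch of the geo-fencing idea described above (all names are illustrative; this is not the actual Percipient interface):

```python
# Before any IP leaves the cache, it is checked against an admin-maintained
# "Do Not Download" list for the engineer's current geography.
DO_NOT_DOWNLOAD = {
    "restricted_phy_ip": {"GEO_B", "GEO_C"},  # admin-marked restricted IP blocks
}

def fetch_ip(ip_name: str, geography: str) -> str:
    """Load an IP into a workspace unless it is geo-fenced for this location."""
    if geography in DO_NOT_DOWNLOAD.get(ip_name, set()):
        raise PermissionError(f"{ip_name} is on the Do Not Download list for {geography}")
    return f"loaded {ip_name}"

print(fetch_ip("uart_ip", "GEO_B"))  # unrestricted IP loads normally
try:
    fetch_ip("restricted_phy_ip", "GEO_B")
except PermissionError as err:
    print(err)  # the block is denied, and the denial is traceable
```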

Summary

The semiconductor industry has spread worldwide, and yet protecting semiconductor IP remains a looming concern, as ITAR (International Traffic in Arms Regulations) and the Deemed Export Rules carry steep financial penalties for companies that leak restricted IP. Instead of using manual methods to track IP and risking IP leakage, why not use something like Percipient, which helps automate and enforce IP reuse in a safe, legal manner?

In this blog I've summarized the features and methodology of the Percipient IPLM tool, which blocks accidental IP leakage so that your engineers can concentrate on bringing to market new SoC systems and products that satisfy export rules with the least manual overhead.

To read the complete 11-page paper on this topic, visit the Methodics site and register.



China’s chip making impact hits DRAM first

by Robert Maire on 12-24-2019 at 6:00 am

China Memory

The Doctrine of Eternal Recurrence (Nietzsche)… Déjà vu all over again…

The semiconductor industry has seen this movie before, several times: a new entrant comes into the memory chip industry, disrupts the status quo, and goes on to dominate the industry (until the next new entrant…).

The Japanese did it to the American chip industry; the Koreans did it to the Japanese; and now China will do it to the Korean chip industry… And don't forget about Taiwan in here as well…

Back in ancient chip industry history there used to be more than seven US manufacturers of DRAM (Intel, IBM, Motorola, Micron, Mostek, National, and TI among them); now there is only one left, as Japan pushed the US out of the DRAM business…

Japan lost out to Korea as Japanese chip engineers spent their weekends off in Korea, making a few extra yen by transferring know-how and secrets to brand-new, start-up Korean DRAM manufacturers.

We are likely at the beginning of China entering the memory market to eventually displace the existing Korean dominance. China has bought, begged, borrowed, or stolen memory technology to get there.

Many currently say it will never happen, or it will take too long or China will never get the technology or the manufacturing right but those statements have been heard before in the US and Japan (just before they lost their chip dominance at the time…) and we know how the movie ended…

China's memory makers are share-driven, not profit-driven…

One key factor that must be understood is that a new entrant to a market (not just in the chip market…) is driven not by profitability but by market share and total revenue, even at the expense of profits…

Existing players want to maintain profitability and will cede market share to try to maintain profitability.

We have seen this before and see it every day in other “commodity like” markets that memory emulates.

China’s initial production of memory chips has nothing to do with profitability and everything to do with self-reliance in chips and the long game of market share and eventual market dominance.

China certainly has the resources and deep pockets to sell at a loss for a very long time in order to gain more than a foothold in the memory chip market.

In other words it really doesn’t matter if China can make memory chips on a cost competitive basis, it only matters that it can make them (which it seems to be doing)

Manufacturing at a profit can come later…much later

It doesn’t take a lot to upset the delicate supply/demand balance in the memory chip market

Much like that other giant, global, "commodity-like" market, oil, the balance between supply and demand is a crucial and delicate balancing act that the industry maintains by ongoing, daily tweaking of supply to match the ever-changing demand appetite of the global market.

Think of two heavy, giant elephants, one supply and one demand, in perfect balance on a seesaw….it doesn’t take a lot of weight, on either side, to throw the system out of balance and quickly impact stability (pricing and profitability).

Perfect equilibrium of supply and demand in the memory chip market is tough, if not impossible to achieve (much like the oil market) especially when you consider that the oil market has OPEC to regulate supply and the memory market seems to do it on an Ad Hoc basis (except for when the memory chip makers were caught conspiring…)

Memory makers have been pruning supply for over a year to try to get back into balance, which it seems the industry is finally close to achieving.

With China entering the memory market, it would not take a lot of supply to destabilize the existing balance that the industry has worked so hard to achieve. Existing memory makers would have to cut production even further (and idle more semiconductor equipment tools…) to accommodate a new supplier to the market.

There have been some good past studies and analysis of the financial and competitive dynamics of the memory market

MIT study of the DRAM Market

China is further along in memory chips than expected

Where there's a will there's a way……

I have heard a lot of people say that China will never catch up with Korean or other memory makers… We think that is a very short-sighted statement that has been proven wrong in similar previous instances.

For those who doubted where China would be or where they would get memory technology, we would point to Innotron (now ChangXin Memory), which claims to be producing 20,000 19nm DRAM wafers a month, with capacity slated to double to 40,000 wafers per month by Q2 2020.

While 20,000 wafers per month is not a lot, getting to 40,000 a month starts to feel like enough to impact the current delicate, almost equilibrium in DRAM.

We think the impact of China on the DRAM market is not as far away as people think . How long will it take for ChangXin to get to 100,000 DRAM wafers a month?

Article on ChangXin memory

Even if this is the kind of exaggeration that we commonly hear in China, it's a pretty good one…..

A “Zombie” Qimonda?

We think that ChangXin is perhaps more "real" than other Chinese chip companies we have heard about because they apparently have gotten ahold of the majority of Qimonda's DRAM technology and know-how. Unlike Fujian Jinhua, which stole technology from a "living" US company, Micron, ChangXin got it from the now-defunct Qimonda, so there is no one left to complain or object.

The US government doesn’t have any grounds to “blacklist” and put ChangXin out of business as it did with Jinhua which stole from Micron.

It could turn out that ChangXin is the Chinese resurrection of Qimonda come back to haunt the industry from beyond the grave in China.

We think this basically negates the argument that China will never get memory chip technology as they clearly already have it.

China's memory chip industry emergence is poorly timed for an industry cyclical recovery, especially in DRAM

The timing of China entering the DRAM market is not very good as it has been looking like DRAM recovery was delayed until the end of 2020 at best, with no new equipment purchase uptick expected until then.

China becoming meaningful in the DRAM market could certainly impact the cyclical recovery and delay or derail it, although it's too soon to tell.

China’s entry into the NAND market may be less impactful as the market is bigger with much more “elastic” demand. Yangtze (YMTC) is the clear leader in Chinese NAND and will likely emerge as the number one Chinese player but NAND has already started to recover and is more robust than still struggling DRAM.

China’s low equipment utilization could “catch up” with the equipment industry

If we take away semiconductor equipment sales to China the semiconductor equipment industry would be down, not up as it is now.

However, all that equipment the US and others have sold to China has not been put to good, efficient use, as it has in Korea, Taiwan, or the US. A lot of China-bound equipment has wound up in start-up fabs or trailing-edge fabs that are not turning out as much value in wafers.

As an example, China accounted for roughly a third of KLA's business, yet China certainly does not account for a third of all global semiconductor supply. So it would seem that a lot of equipment is underutilized in China and not producing its proportionate share of wafers compared to equipment purchases.

As that equipment comes up to speed and gets fully utilized it represents a large backlog of capacity that will come on line at some point as incremental capacity.

If all the equipment currently being sold to China were fully utilized the industry would be flooded with capacity.

This “overhang” will have to be managed as China comes up to speed in semiconductor production.

The equivalent in the oil industry would be a whole lot of oil rigs being sold to developing producers without a corresponding increase in production in the near term.  Sooner or later those new rigs will go into production somewhere and increase supply accordingly….

Summary

China entering the semiconductor market is a repeat of Japan, Korea & Taiwan’s entry and eventual displacement of existing players in the chip market

China will likely impact the DRAM market first and may get there soon enough to impact the expected cyclical recovery

China does not need to be a big supplier to upset the current memory market equilibrium and is not limited by profitability concerns

When China finally does come up to speed its large spending on equipment will clearly increase semiconductor supply

The Stocks

Right now, the concerns we have expressed above are much longer term in nature, and the near-term positive news of a potential trade deal is what's driving the positive tone in chips.

The lack of details of the China trade deal makes us assume that we aren't getting the details because the details are not good; otherwise they would have been tweeted out long ago in extreme minutiae. We also seem to be hearing more about agricultural products and not technology & chips. But the reality is that the details don't really matter; all that matters is the headline that a deal has been struck.

This “derisking” headline is what seems to matter in the near term for tech issues and chips in particular.  Given the need for a “win” at this critical time, as well as the market’s reaction, we don’t think the administration will risk upsetting the cart with Huawei, Jinhua, intellectual property or other delicate or longer term issues.


IEDM 2019 – Applied Materials panel EUV Recap

by Scotten Jones on 12-23-2019 at 10:00 am

On Tuesday night of IEDM, Applied Materials held a panel discussion, "The Future of Logic: EUV is Here, Now What?". Regina Freed, managing director at Applied Materials, was the moderator. The panelists were: Geoffrey Yeap, senior director of advanced technology at TSMC; Bala Haran, director of silicon process research at IBM; Ramune Nagisetty, senior principal engineer at Intel; Barbara De Salvo, silicon technology strategist at Facebook; and Ali Keshavarzi, adjunct professor at Stanford University.

Each panelist presented their personal view on the topics discussed; their views do not represent the companies they work for. Furthermore, my typing skills are not good enough to produce a verbatim transcript; the following is my summary/paraphrasing of what was discussed.

The panel began with each panelist presenting some key issues from their view:

Geoffrey Yeap

  • System on Integrated Chips (SoIC), a new TSMC process.
  • Power Performance Area Cost Time – PPACT where new technologies need to be on-time.
  • Need more low-VDD operation focus.
  • Need a more energy efficient transistor.
  • Houston, we do have a problem: interconnect resistance.

Ramune Nagisetty

Moore’s law four phases:

  1. Dennard scaling, dimensions drove performance.
  2. Post-Dennard: strained silicon, HKMG, FinFET.
  3. DTCO (Design Technology Co-Optimization).
  4. Heterogeneous Integration – Chip-lets infrastructure and ability to mix and match technologies.

Bala Haran

The future of logic:

  • New architectures, nanosheets – more flexibility for design with Weff tunability, an Epi-defined channel rather than patterning, easier to scale. Dual Damascene Cu -> subtractive etch and alternative conductors.
  • Orthogonal elements – scaling, eMemory.
  • New materials and processes – for nanosheets you need volume-less work function using dipoles, integrated low temperature cleans, new materials.
  • System Technology Co-Optimization.

Ali Keshavarzi

  • Not all about scaling:
    • Moore’s law has slowed down.
    • Dennard scaling is finished.
    • Von Neumann architecture is out of steam.
    • We need the next switch.
    • Communication energy has not scaled.
    • We need edge computing.
  • Today’s approach:
    • Communication centric, device to cloud and back to act.
    • Will be too much energy, too much latency and too much data.
    • Lack of privacy and security.
  • Solution:
    • Edge computing before transmitting to the cloud.
    • Compute and act locally and then only transmit valuable data.
  • Three keys:
    • Small-system AI locally.
    • Intermittent computing – instant, eNVM + arch + software.
    • Burst communication that is context aware.

Barbara De Salvo

  • FinFET, 7nm, 5nm, 3nm, GAA, Vertical GAA, 2D Materials, etc.
  • What will the next application be?
  • Showed first “personal computer” and current smart phone.
  • Not so distant future – augmented reality glasses – can see reality but also project enhancements, see in low light, see people from remote locations.
  • Requirements:
    • Optics and display.
    • Computer vision.
    • System design.
    • User experience.
  • Extremely difficult.
  • Objectives for AR silicon:
    • 100x current performance/power.
    • Form factor – size of glasses.
    • Wireless – always connected.

Following the individual presentations, the panel discussion began with Regina Freed asking questions and then various panelists providing comments.

Regina Freed – what do we need to scale?

  • Geoffrey Yeap – EUV opened the door, 5nm needs fewer masks for the first time, but interconnect resistance is an issue.
  • Ramune Nagisetty – parasitics are an issue.
  • Bala Haran – materials for reliability and route-ability.

Regina Freed – what do we need to enable this?

  • Ramune Nagisetty – GAA, with all the papers we don’t have it yet, but it is a better transistor. Monolithic 3D and advanced packaging to put together heterogeneous technologies.

Regina Freed – what do we need for materials?

  • Bala Haran – Epitaxy will be the new multi-patterning, plus area-selective deposition and atomic layer etching.

Regina Freed – are we going to use more materials?

  • Bala Haran? – Take out radioactives and noble gases and there are about 67 elements and we use about half of them. Over next decade we will use 50% of the ones that are left.

Regina Freed – do we need something else for interconnect?

  • Geoffrey Yeap – we need a superconducting contact at room temperature.

Regina Freed – what do you think of buried power rail?

  • Bala Haran – thinks it is a great concept, IBM had eDRAM with buried metals and there were a lot of challenges, BPR looks a lot like that.
  • Ramune Nagisetty – thinks we will get there, power delivery is increasingly challenging, we will need it.

Regina Freed – AR/VR needs something very new, low power, small form factors, see through materials, what is needed from IC design?

  • Barbara De Salvo – what do we really mean by PPA for 5nm, 3nm? Designer requirements are really different: high-performance devices are always active, with many users on the same server, so they are always in use. In AR/VR there is long standby, so leakage is very important. Most of Moore’s law is for high performance; they need some way to customize the core technology. There are a lot of different markets and they need differentiation.

? to Geoffrey Yeap – a lot of your spending is driven by your customers.

  • Geoffrey Yeap – we will pick a platform approach and then will customize for application around core platform for cost and yield. 5nm platform will serve 5G and server and then will customize.
  • Barbara De Salvo – they want very low power, and it is data transfer and memory access that are most costly. There are several factors of difference between computing power and data movement. The core technology needs embedded memory. Memories developed for these markets so far have not been at the leading edge.
  • Ali Keshavarzi – develop RRAM that is 1,000x or 10,000x better for memory in compute or AI? A micro drone has to be smart and low power and make decisions on board. We need to change the memory hierarchy, with some of the learning in SRAM and some in eNVM.

Regina Freed to Bala Haran – what is needed?

  • Bala Haran – low power is being touted for FDSOI with eMRAM for some applications, and FinFET and then nanosheets for high performance; the requirements for these two are very different.
  • Ramune Nagisetty – it’s such a different optimization point, look back to the iPhone that drove the technologies at foundries. There really needs to be a big customer that drives things like Apple.
  • Geoffrey Yeap – there needs to be a big business pull to drive it.
  • Ali Keshavarzi – what are you willing to pay and business case.
  • Bala Haran – eMRAM for automotive is responding to the marketplace in the legacy nodes.
  • Barbara De Salvo – for many years there was criticism of NAND Flash by NOR for reliability. Some companies never invested in the technology. When the application occurred, NAND took over. The technology needs to be ready for the application.
  • Ali Keshavarzi – one argument in the past for embedded memory to only be on legacy nodes was for material compatibility. Lots of work at the conference on HfOx fero memory that is FinFET compatible. Maybe Facebook and TSMC should work on it and both be happy.
  • Geoffrey Yeap – slightly different view, in the past 50 years business model has become the foundry model. TSMC service is king, they listen to customer and do what they need. In the right time the right technology will be there.
  • Ali Keshavarzi – someone had to provide the leadership.
  • Barbara De Salvo – for the innovation the core of software is very important, and it is addressed by the current model. To address the system, you need design and software.
  • Ramune Nagisetty – we have an example: when AlexNet won the ImageNet competition, it took the dataset, GPUs and algorithms coming together. In the 1990s MIT researchers had backpack computers and glasses. There will be some confluence that will bring this all together. The technologies that will meet the needs of AR/VR are in the pipeline.
  • Ali Keshavarzi – you need to worry about performance per watt or it will go away.
  • Ramune Nagisetty – it won’t go away; it will be there until it is met.
  • Bala Haran – before we talk about anything else let’s talk about memory because most of the die is memory and GPU. Intel has a nice paper on L4, we need to look at double stacked MTJ, need to look at L3.
  • Regina Freed – are you saying the future of logic is memory?
  • Bala Haran – the requirements for AI memory are different, you can live with more errors and that will drive down power. Nonvolatile for in-memory compute and analog elements for neuromorphic computing with 1,000x improvement.
  • Barbara De Salvo – agrees for performance and edge devices but right now data transport is the issue.

Bala Haran asked Ramune Nagisetty – how do you see packaging?

  • Ramune Nagisetty – take novel parts and memory and put them together in packages. You can take HBM and put it near the processor; it is the first toehold in the space. Packaging enables some novel memories: even if they’re ready now but can’t be integrated with CMOS, they can be integrated through packaging, so it enables and accelerates.

Bala Haran asked Geoffrey Yeap – how do you manage legacy and leading edge?

  • Geoffrey Yeap – turned it around and said let the market decide. Provide leading edge, legacy, chip-let and packaging technology and let the customers decide how to use the tools. At large volume the market will force costs down. He remembers when SRAM was a separate chip, until the market decided it should be on the logic chip.
  • Ali Keshavarzi – we all understand the market, bring the chips closer with chiplet but it isn’t a monolithic solution on the die. Going chip to chip there is a power penalty.
  • Ramune Nagisetty – in a 3D stack energy can be much less.
  • Ali Keshavarzi – if you really want to map it in SRAM you need a complete wafer that is SRAM. We need to be very clever and work with the business forces.

Regina Freed – We all talked about heterogeneous integration, what do we need to do to make it almost as good as on die?

  • Ramune Nagisetty – tiling tax, power and interconnect penalties for going die to die. 3D and then layer transfer further reduce the tiling tax. In a business model where you get the best-in-class die from TSMC, GF, Intel and integrate them, if there is a failure, who owns the problem? A lot of problems that are partly business and partly technology. We already have a model with the PCB industry, with parts from all over the world and everyone gets paid. Heterogeneous with chip-lets, where it comes together and looks like Legos.
  • Barbara De Salvo – what about design tools for this.
  • Ramune Nagisetty – yes there needs to be tools, flows and methods.

Regina Freed asked Geoffrey Yeap – are you thinking about enabling this?

  • Geoffrey Yeap – if the customer asks for it.
  • Bala Haran – one thing I would add is that a consolidation of OSATs or suppliers hasn’t happened in the packaging world. Too many options: panels, 2.5D, etc.
  • Ali Keshavarzi – can you put the chips in a mold, RDL, extremely inexpensive.
  • Ramune Nagisetty – the low-cost run is often related to volume, even something that seems inexpensive is expensive if you don’t have volume.

Regina Freed – can we trade off cost for performance to get to market?

  • Ramune Nagisetty – cost is definitely important, being efficient, cost per function efficacy.
  • Ali Keshavarzi – we have covered this.

Regina Freed – Last question before going to audience, until recently our model was serial with true collaboration with end user, do we need more collaboration?

  • Ramune Nagisetty – we have had consortia in the past, not everyone is working in serial. She thinks the most interesting thing today is the cloud service providers creating their own chips.
  • Barbara De Salvo – software – hardware optimization and customization of technology. Even in R&D it needs to be a view to the whole system. New players like Facebook, thinks it will be different in the future.
  • Bala Haran – look at DRAM and Flash, deep collaboration between companies, thinks logic needs to have companies specialize in each piece.
  • ? – We are all asking a lot of things from the industry but a lot is possible. New materials and processes and advanced packaging. Evolution is here and a combination of advanced technologies.

AI the Matrix and Intel
by Daniel Nenni on 12-23-2019 at 6:00 am

I would guess that most people have seen or at least heard of the Matrix movies, but how many people can remember who conquered the earth to begin the series? It was artificial intelligence (AI), of course, which seemed pretty far-fetched 20 years ago, but today not so much. In fact, for those of us in the AI know it seems quite likely in some fashion. Hopefully a group of hackers will save us all in the end, like in the movie. By the way, the fight scenes are a good example of machine learning (ML) and how it will only get us so far.

Intel made a move that surprised some people (outside analysts mostly) and purchased Habana Labs for $2B. Surprising because in 2016 Intel purchased Nervana Systems for about $400M. On the inside, however, let’s call the Nervana purchase an AI people learning (PL) experience for Intel that led to the Habana purchase. If you look at the executive staff at Habana and compare it to Nervana’s, you will see why. Habana is stacked with silicon implementation experts; Nervana didn’t even do their own chip, an ASIC company did.

Remember the statement Intel CEO Bob Swan made about “destroying the Intel idea of keeping the 90% CPU market share and focusing on growing other market segments.” I would say this $2B acquisition suggests that his statement was a strategic head fake.

Moving forward, I would say it would now take a merger of Nvidia, AMD, and Xilinx to challenge Intel’s data center dominance, because that is what it will take to beat Intel to the Matrix, absolutely.

After binge watching The Matrix (1999), The Matrix Reloaded (2003), and The Matrix Revolutions (2003) on Netflix, I truly expect the fourth Matrix movie (2021) to have a serious technology update.

From my favorite technology futurist:

“The rate of change of technology is incredibly fast. It is outpacing our ability to understand it. Is that good or bad? I don’t know.” ELON MUSK

Bad!

“We are already a cyborg. Because we are so well integrated with our phones and our computers.” ELON MUSK

Understatement!

“As AI gets much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger. I do think we need to be very careful about the advancement of AI.” ELON MUSK

Absolutely!

During the day I do semiconductor ecosystem mergers and acquisitions. During the night I transform into SemiWiki blogger extraordinaire and semiconductor futurist, which is why I am intrigued by the proposed Broadcom offload of the $2.2B Wireless RF group. Hock Tan is one of my favorite semiconductor CEOs and I am very happy to see that he is diversifying away from the margin-constrained fabless chip business. As I have said before, the systems companies will again rule the semiconductor industry, doing to the fabless chip companies what the IDMs did to them 30 years ago.

We documented this in our book on the History of ARM in the chapters on Apple and Samsung. Apple’s SoCs are industry leading because Apple can develop the silicon around the system. The other mobile SoC giants have followed suit (Huawei, Samsung, MediaTek, etc.). It’s only a matter of time before the cloud giants (Google, Amazon, and Microsoft) do the same thing and cut out the fabless middlemen. Other strategic systems companies are sure to follow, so it is a great move on Hock Tan’s part, in my opinion.


The Tech Week that was December 16-20 2019
by Mark Dyson on 12-22-2019 at 6:00 am

As we approach the end of 2019 I wish everybody a Merry Christmas and a Happy New Year. This will be my last update for a few weeks as I will also take a little break over the holiday season.

Despite a lot of people winding down for the year, there was still lots of interesting news from last week with lots of data points pointing to an even better 2020, so read on.

SEMI is predicting that the tide has turned and that 2020 looks positive for the industry, with many positive indicators. The global purchasing managers index has started to improve after a steady decline and in November moved back above 50, expansion territory. In addition, equipment manufacturers’ sales showed a 2% QoQ improvement in Q3, up 0.4% on Q3 a year ago. With many other indicators also pointing to growth, things are looking good for 2020.

Micron also announced this week on their earnings call that they have reached the bottom and expect recovery in 2020; in addition, they announced they had obtained all requested licenses to ship some products to Huawei. Micron’s fiscal Q1 total revenue was US$5.1 billion, up 6% sequentially but down 35% YoY. DRAM sales, which represent 67% of their revenue, were up 2% on last quarter but down 41% YoY, whilst NAND showed better performance, up 18% sequentially and only down 14% YoY.

Micron’s assessment that they are at the bottom is in line with DRAM prices, which showed a rebound this month, with prices up more than 10% from the low of December last year. DRAMeXchange is predicting that prices will rally as early as the first quarter of 2020.

Taiwan’s semiconductor sector is expecting growth of 5% in 2020 due to strong demand from AI applications and 5G infrastructure, according to Taiwan’s Industrial Technology Research Institute. This prediction is in line with IHS Markit’s forecast that global semiconductor revenue will rise 5.9% in 2020.

Self-driving cars are still coming, but the optimism that they will be here soon has died down, and exactly when we will really see them in everyday use has been pushed out by at least several years by most of the major car manufacturers. There is an interesting article by CNN which reviews the current status and challenges facing autonomous cars.

TSMC’s 5nm technology is on track for release next year; that was the message from TSMC at the IEDM conference last week. They promise devices 15% faster or 30% more energy efficient compared to 7nm, and SRAM cells at 0.021 sq mm.

Taiwan’s assembly and test subcon ASE plans to acquire France-based Asteelflash, Europe’s 2nd largest EMS company, for US$450 million. The deal will allow ASE to extend its worldwide presence and expand its production of automotive devices.

As the year comes to a close, it’s time to review some of the advances that have happened in 2019. Laser Focus World has published its top 20 photonics technology picks for 2019.

Whilst LEDs Magazine has a list of its top 20 news articles of 2019 from the LED and lighting industry.

Finally, if you are reading this article on a smartphone or tablet, the night mode setting on your device may not be helping you get to sleep. According to researchers from Manchester University, blue light is not the main problem preventing you from sleeping; based on their study, they suggest that dim, bluer light is actually more restful. The main problem is probably that you are using your device just before you go to sleep, stimulating your brain or worrying about that latest email you just read.


Debugging Hardware Designs Using Software Capabilities
by Daniel Nenni on 12-20-2019 at 6:00 am

Every few months, I touch base with Cristian Amitroaie, CEO of AMIQ EDA, to learn more about how AMIQ is helping hardware design and verification engineers be more productive. Quite often, his answers surprise me. When he started describing their Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE), my first reaction was that engineers had plenty of GUIs at their fingertips already. When he talked about the Verissimo SystemVerilog Testbench Linter, I said that lint surely must be a solved problem by now. Then I wondered how the Specador Documentation Generator differs from all the shareware solutions available. In my most recent talk with him, the topic was AMIQ EDA’s DVT Debugger, their fourth major product. Given that simulators have built-in debuggers, I was curious once again how their tools are differentiated and how they actually make money.

As in our previous discussions, Cristian was clear in describing the limitations of other solutions, including features built into other tools. In the case of interactive debugging of test cases, the major simulators do have some nice capabilities. However, the GUIs are different and proprietary, so moving from an IDE to a simulator for debug is jarring. If the project uses multiple simulators, a not uncommon practice, the engineers are cycling through multiple screens constantly. The DVT Debugger is an add-on to the DVT Eclipse IDE, so users can debug in the same environment that they use to write, analyze, and visualize their design and verification code in SystemVerilog, VHDL, or the e language. The tool supports all major simulators, so even with multiple vendors involved the debug interface is unchanged.

The DVT Debugger provides all the interactive functionality that software programmers enjoy, applied to design and verification code. The debugger can launch a new simulation run or connect to an existing run on the same machine or on the network. Users can insert breakpoints into their code, including conditional breakpoints, and enable or disable them. A breakpoint stops a running simulation to allow examining the values of variables to see what is happening in the design and testbench. It is possible to change variable values before resuming the run or starting a new one. Under user control, the debugger can step line by line through the code, step over (skip) a line of code, or step into or out of a function. The complete call stack is displayed, and users can move up or down. Users can define and watch complex expressions for more insight into the running code. Further, dedicated views display the simulation output and allow typing commands directly to the simulator.
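The conditional-breakpoint workflow described above can be sketched in plain Python for readers more familiar with software debuggers. The snippet below is an illustration only, not AMIQ’s implementation: it uses a trace hook to mimic a conditional breakpoint that snapshots local variables whenever a condition holds, which is conceptually what the debugger does when it pauses a simulation and populates its Variables View. All function and variable names here are hypothetical.

```python
import sys

def run_with_conditional_break(target, condition, snapshots):
    """Run `target`, recording its local variables at each line where
    `condition` evaluates true -- the essence of a conditional breakpoint."""
    def tracer(frame, event, arg):
        # Only watch line events inside the target function's frame.
        if event == "line" and frame.f_code is target.__code__:
            if condition(frame.f_locals):
                snapshots.append(dict(frame.f_locals))  # like a Variables View
        return tracer
    sys.settrace(tracer)
    try:
        target()
    finally:
        sys.settrace(None)

def simulate():
    # Stand-in for a running test: `value` plays the role of a signal
    # updated once per "clock cycle".
    value = 0
    for cycle in range(5):
        value += cycle

snapshots = []
# "Break" whenever value exceeds 3 and capture the locals at that point.
run_with_conditional_break(simulate, lambda v: v.get("value", 0) > 3, snapshots)
print(snapshots[0]["value"])  # prints 6
```

Here `simulate` stands in for a running simulation; the real tool attaches to the simulator process rather than a Python trace hook, but the user-visible idea is the same: stop only when the condition of interest holds, then inspect (or change) live variables.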

While using all these debugging features, users remain within the IDE. They can take advantage of all the navigation and visualization features for which the DVT Eclipse IDE is known. These include tracing signals, finding usages, generating schematic views, and cross-probing across the wide range of available views. The Debug View and the code editor are always synchronized. For example, when the user moves up and down the call stack, the active line corresponding to the selected stack frame is automatically highlighted. Similarly, the Variables View displays the variables associated with the stack frame selected in the Debug View. These include the arguments of the current function, locally declared variables, class members, and module signals. Users can change variable values at runtime from this view.

A powerful debugger is required for modern hardware designs. Cristian reminded me of the old-fashioned way of debug: adding print statements to the code to trace what’s happening. Well-designed debug messaging is valuable, but iteratively adding temporary statements is tedious and error-prone since engineers must guess the source of a test failure and re-compile every time they change the code. These temporary print statements should be deleted so they do not reduce code readability and clutter simulation output once the bug is fixed, but editing code excessively introduces more risk. Controlling a simulation as a test runs, having full visibility into all variables, and modifying variables to exercise “what-if” scenarios make for a more scalable and more efficient process.

I asked Cristian whether DVT Debugger users ever use the debuggers built into the simulators, and he said that they do. Simulation vendors provide a lot of “hooks” for other tools to link in but there may be features available only in their own debuggers that require proprietary connections. He said that the goal of their tool is not to replace simulator debuggers but rather to offer a rich, software-like debug experience in the same environment where design and verification engineers write their code. As in their other products, AMIQ EDA has taken powerful, proven techniques originally developed for programmers and adapted them to add value to the hardware design and verification flow. As Martha Stewart used to say, it’s a good thing.

To learn more, visit https://www.dvteclipse.com/products/dvt-debugger.

Also Read

Automatic Documentation Generation for RTL Design and Verification

An Important Next Step for Portable Stimulus Adoption

With Great Power Comes Great Visuality