
Mentor Helps Mythic Implement Analog Approach to AI

by Tom Simon on 02-27-2020 at 6:00 am

Mythic AMS Verification Challenges

The entire field of Artificial Intelligence (AI) has resulted from what is called “first principles thinking”, where problems are re-examined using a complete reassessment of the underlying issues and potential solutions. It is a testament to how effective this can be that AI is being used for a rapidly expanding number of applications that previously challenged or defied traditional approaches in programming. Even using conventional CPU based architectures AI offers enormous advantages over conventional sequential “instruction based” coding in a wide range of fields, including autonomous driving, sensor data analysis, resource optimization, IoT, safety systems, etc. Yet even more impressive improvements in AI performance have come from the use of optimized AI processors.

Some of these AI processors rely on several well understood concepts that can improve the efficiency of the types of computations made in a neural network. Adding parallelism is the first approach, the other is to move memory closer to the processing elements. The AI chip company Mythic is making AI accelerator chips that use these proven methods and adds to them with an ingenious new “first principles” approach.

The seed of their idea is that Ohm’s law, V = IR, is a multiplication. The multiply-accumulate (MAC) operation is the mainstay of AI neural network implementation. Digital multiplication is cumbersome, and frequently inefficient and slow, even when reduced to 8-bit precision, which works well enough for many recognition and inference tasks.

Mythic has introduced analog computation as the method for performing MAC by using Flash memory cells as precision resistors to hold training coefficients. When voltage values are run through the flash memory cells the output current is the result of an analog computation. Using the memory cell as a computation unit saves not only memory access, but also significantly reduces computation time.
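The principle can be illustrated with a small numeric sketch (my own illustration, not Mythic’s actual implementation): treat each flash cell as a programmable conductance, apply the activations as voltages, and the summed bit-line current is the dot product.

```python
import numpy as np

# Illustrative model of an analog MAC: each flash cell stores a trained
# weight as a conductance G = 1/R (in practice, negative weights need a
# differential pair of cells). Ohm's law gives each cell's current
# I = V * G, and summing the currents on a shared bit line performs the
# accumulate step for free.
weights = np.array([0.5, 1.2, 0.8, 2.0])           # trained coefficients, as conductances
input_voltages = np.array([1.0, 0.25, 0.5, 0.75])  # activations applied as voltages

# The bit-line current *is* the MAC result: I_out = sum(V_i * G_i)
bitline_current = float(np.sum(input_voltages * weights))

# Compare with the equivalent digital dot product
assert np.isclose(bitline_current, np.dot(input_voltages, weights))
print(round(bitline_current, 3))  # -> 2.7
```

In the real device this sum is produced in a single analog readout, which is why the memory access and the computation collapse into one step.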


However, this approach requires significant analog design expertise, especially in designing the memory cells and the analog-to-digital converters. Accuracy is essential, so it is critical to verify that the entire computation chain behaves as intended.

Because this is a mixed-signal design, SPICE simulation alone is not adequate for verification; mixed-signal simulation is called for. Mentor, a Siemens business, and Mythic just announced Mythic’s use of Mentor’s Analog FastSPICE (AFS) and Symphony mixed-signal simulation platform to simulate and verify the thousands of ADCs needed in their designs, and to verify overall chip performance. This involves RTL simulation running alongside the analog simulation.

Mythic chose Mentor’s Analog FastSPICE because of its proven speed and accuracy at nanometer scale, and its demonstrated correlation with silicon for full-spectrum device noise analysis. The Symphony mixed-signal simulation platform helps verify the integration of digital and analog logic in Mythic’s Intelligence Processing Units (IPUs). Mythic says it has been very pleased with the intuitive use model, powerful debugging features and configuration support.

The development of electronic systems is a layered process involving a chain of steps vital for reaching success. First principles are not only being used in the last step of chip design; they were applied by Mentor as well in developing the enabling solutions. Had the needed supporting tools not been available, Mythic might not have been able to apply its innovative approach and enjoy the technical success it has today. The full announcement of Mythic’s use of Mentor’s analog and mixed-signal solutions is available on the Mentor website.


Thermal Issues and Solutions for 3D ICs: Latest Updates and Future Prospect

by Mike Gianfagna on 02-26-2020 at 10:00 am

2D vs. 3D heat maps

At DesignCon 2020, ANSYS held a series of sponsored presentations, and I was able to attend a couple of them. These were excellent events, with material delivered by talented, high-energy speakers; the DesignCon technical program has many dimensions beyond the conference tracks. One of the presentations dealt with 3D ICs. It was presented by Professor Sung-Kyu Lim from the School of Electrical and Computer Engineering at the Georgia Institute of Technology.

The work presented by Professor Lim is funded by DARPA, Arm and ANSYS. I should also point out that Professor Lim’s student, Lingjun Zhu, contributed to this work as well. The discussion focused on thermal, IR-drop and PPA analysis of 3D ICs built with Arm A7 and A53 processors. Since 3D IC can mean many things, Professor Lim’s focus was on bare die stacking. He reviewed several designs using these techniques from companies such as GLOBALFOUNDRIES, Intel and TSMC.

First, a bit about the design flow used for these test cases. Professor Lim took a practical approach here, adapting commercially available 2D IC design tools to a 3D design problem. Logic/memory designs were decomposed into two tiers, one for logic and one for memory. First, the memory tier was designed, resulting in a pinout for that tier. Then a double metal stack was created. This allowed the memory tier and the logic tier to communicate through dense connections using TSVs, face-to-face pads, or monolithic inter-tier vias (MIVs). Next, the logic tier was placed and routed along with connections from the memory tier that were also represented in the logic tier.

The results of this approach were discussed for an Arm Cortex A7 design containing L1 cache, L2 cache and logic. All of the L2 cache and some of the L1 cache were placed on the memory tier, and the rest of the design was implemented on the logic tier. Interconnect between the cache and logic was shortened considerably as a result of this approach. A similar process was applied to a Cortex A53 design. See below.

The results of these experiments yielded a smaller footprint thanks to the two-tier approach and a performance improvement thanks to the shorter routes. In turn, the faster operating speed resulted in more power, higher IR-drop and increased temperature. The results are summarized below.

Experiments were run on power savings as well. In this case an LDPC error-correction circuit was used. Due to shorter wire lengths and lower capacitance, a 39% power saving was achieved, illustrating another advantage of 3D design.
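The mechanism behind this saving is the standard dynamic-power relation, P = α·C·V²·f: switched wire capacitance enters linearly, so cutting wire length cuts power directly. A quick sketch with illustrative numbers (mine, not from the talk):

```python
# Dynamic power P = alpha * C * V^2 * f. Capacitance enters linearly,
# so a 39% reduction in switched capacitance gives a 39% power saving.
# All numbers below are illustrative, not from Professor Lim's data.
def dynamic_power(activity, cap_farads, vdd_volts, freq_hz):
    return activity * cap_farads * vdd_volts**2 * freq_hz

p_2d = dynamic_power(0.2, 1.00e-12, 0.8, 1e9)  # hypothetical 2D wire load
p_3d = dynamic_power(0.2, 0.61e-12, 0.8, 1e9)  # same net with 39% less capacitance
print(f"power saving: {1 - p_3d / p_2d:.0%}")  # -> power saving: 39%
```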

Going back to the Arm designs, below are heat maps of the various experiments between 2D and 3D to facilitate thermal comparisons.

Professor Lim then discussed the tool flow used for these analyses. ANSYS RedHawk was used extensively to perform many tasks, including power, thermal and IR-drop analysis. All of this work was based on very fine-grained analysis of each routing segment and device across many temperature profiles. Below is an overview of the flow.

Professor Lim concluded his talk with a discussion about the impact thermal awareness could have on IC design.  He proposed a temperature-aware timing closure flow that would update circuit performance based on actual temperature gradients, which can now be calculated. This approach could produce designs that are much more robust in real-world environments. Below is an overview of the proposed flow.

To learn more about thermal-induced reliability challenges and solutions for advanced IC designs, please check out this recent ANSYS webinar.



Hybrid Verification for Deep Sequential Convergence

by Bernard Murphy on 02-26-2020 at 6:00 am

Hybrid Verification Synopsys

I’m always curious to learn what might be new in clock domain crossing (CDC) verification, having dabbled in this area in my past. It’s an arcane but important field, the sort of thing that if missed can put you out of business, but otherwise only a limited number of people want to think about it to any depth.


The core issue is something called metastability, and it arises in systems that must intermingle multiple clock frequencies, which is pretty much any kind of system today. CPUs run at one frequency, interfaces to external IOs run at a whole galaxy of different frequencies, and AI accelerators at maybe yet another. Clock-wise, our systems are all over the map.

When data is exchanged between these different domains, metastability gremlins can emerge: random chances that individual bits are dropped or delayed, neither quite making it through the gate to the other side nor quite failing to. Bit-wise, there are solutions to this problem, metastability-hardened gates (actually registers), though these too are only statistical in their ability to limit problems. They’re better than crossings that aren’t hardened, but still not perfect, because this is engineering, where perfect is never possible.

Still, if you improve matters to the point that the design meets some acceptable time between failures, everything should be OK, right?
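That “acceptable time between failures” is usually quantified with the classic synchronizer MTBF model, MTBF = e^(t_r/τ) / (T0 · f_clk · f_data). A sketch with illustrative parameters (my numbers, not from the webinar) shows why an extra resolution cycle helps enormously yet never reaches perfect:

```python
import math

# Classic synchronizer mean-time-between-failures model:
#   MTBF = exp(t_resolve / tau) / (T0 * f_clk * f_data)
# t_resolve: time available for the flop to resolve metastability
# tau, T0:   process-dependent flop constants (illustrative values here)
def mtbf_seconds(t_resolve, tau=20e-12, t0=1e-9, f_clk=1e9, f_data=100e6):
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# One extra destination-clock cycle of settling time (1 ns here) turns
# minutes between failures into astronomically long intervals -- better,
# but still only statistical, never a guarantee.
print(f"single flop: {mtbf_seconds(0.5e-9):.1e} s")
print(f"two-flop:    {mtbf_seconds(1.5e-9):.1e} s")
```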

Afraid not. There’s a problem in CDC called convergence. You have two independent signals from one clock domain, crossing into another. Each separately passes through a metastability hardened gate. They later combine in some calculation in the new domain – maybe “are these signals equal?”. This could be multiple clock cycles later.

Now you may (again statistically) hit a new problem. Metastability hardening ensures (statistically) that a signal either gets through or doesn’t get through; none of this “partly getting through”. But in doing that, what emerges on the other side is not always faithful to what went in. It might be delayed or even dropped. Or not; accurately reflecting what went in is also an option.

So when you recombine two signals that were separately gated like this, you can’t be sure they are fully in sync with the way they were on the other side of the gates. On the input side they might have been equal, but when they’re recombined, they’re not. Or at least not initially; maybe they become equal if you wait a few cycles, at least as long as the inputs on the other side didn’t change in the meantime.

In VC SpyGlass we’d do a static analysis complemented by some level of formal analysis to try to catch these cases. That isn’t a bad approach as long as re-combination happens within one cycle. But who’s to say such a problem may not crop up after many cycles? Try to trace this using formal methods and you run into the usual problem – analysis explodes exponentially.

The better method, now becoming more common, is a combination of static and dynamic analysis. Use static CDC analysis to find crossings and recombination suspects, then use dynamic analysis to test these unambiguously, at least to the extent that you can cover them.

Synopsys now provides a flow for this, combining VC SpyGlass and VCS analysis. It is a refinement of a commonly used technique called jitter injection, a method to simulate these random offsets by injecting random delays into the simulation whenever the data input to a gate changes.
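A toy version of the idea (a deliberately simplified model of jitter injection, not the Synopsys flow): give each crossing a randomly chosen latency of one or two destination cycles per sample, then watch two signals that were equal at the source disagree after recombination.

```python
import random

def cross_domain(signal, rng):
    # Jitter-injection model of a hardened crossing: each sample arrives
    # after 1 or 2 destination cycles, chosen at random, mimicking the
    # uncertain settling of a synchronizer.
    return [signal[max(0, i - rng.choice([1, 2]))] for i in range(len(signal))]

rng = random.Random(2020)
src = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]

a = cross_domain(src, rng)  # path A, independently jittered
b = cross_domain(src, rng)  # path B carries the identical source value

# Downstream logic comparing a == b can see spurious mismatches even
# though the two signals were always equal at the source.
mismatches = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
print("mismatch cycles:", mismatches)
```

Rerun with different seeds and the mismatch set changes, which is exactly why these failures are rare, cycle-dependent and hard to enumerate with formal methods alone.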

There are some technical challenges with the standard injection method; you should watch the webinar for more detail. Synopsys says it has made improvements around these limitations. An important challenge that jumped out at me is that there is no obvious way to quantify coverage in that approach. How do you know when you’ve done enough testing?

Himanshu Bhatt (Sr Mgr AE at Synopsys) explains in the webinar how they have improved on traditional jitter injection, how they address the coverage question, and what debug facilities they provide to trace problems back to metastability root causes. You can register to watch the webinar HERE.


Webinar – FPGA Native Block Floating Point for Optimizing AI/ML Workloads

by Tom Simon on 02-25-2020 at 10:00 am

block float example

Block floating point (BFP) has been around for a while but is just now starting to be seen as a very useful technique for machine learning operations. It’s worth pointing out up front that bfloat is not the same thing. BFP combines the efficiency of fixed-point operations with the dynamic range of full floating point. When examining the method used in BFP I am reminded of several ‘tricks’ used for simplifying math problems. The first that came to mind was the so-called Japanese multiplication method, which uses a simple graphical technique for determining products. Another, of course, is the once popular yet now nearly forgotten slide rule.

As Mike Fitton, senior director of strategy and planning at Achronix, will explain in an upcoming webinar on using BFP in FPGAs for AI/ML workloads, BFP relies on normalized fixed-point mantissas so that a ‘block’ of numbers used in a calculation all share the same exponent value. In the case of multiplication, only a fixed-point multiply is needed on the mantissas, plus a simple addition of the exponents. The surprising thing about BFP is that it offers much higher speed and accuracy with much lower power consumption than traditional floating-point operations. Of course, integer operations are more accurate and use slightly less power, but they lack the dynamic range of BFP. According to Mike, BFP offers a sweet spot for AI/ML workloads, and the webinar will show data supporting his conclusions.
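A minimal sketch of the mechanism (my own simplification; the hardware format in the Speedster7t will differ in detail): quantize each block to integer mantissas under one shared exponent, multiply-accumulate the mantissas as plain integers, and handle the exponents with a single addition.

```python
import numpy as np

def to_block_float(values, mantissa_bits=8):
    # One shared exponent for the whole block, sized so the largest
    # magnitude fits; each element becomes a small integer mantissa.
    # (A real implementation would also clamp to the mantissa range.)
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(values)))))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.round(values / scale).astype(np.int64)
    return mantissas, scale

def bfp_dot(a_vals, b_vals, mantissa_bits=8):
    # Mantissas multiply-accumulate as plain integers; the two block
    # scales (i.e. exponents) combine with one multiplication at the end.
    ma, sa = to_block_float(a_vals, mantissa_bits)
    mb, sb = to_block_float(b_vals, mantissa_bits)
    return int(np.dot(ma, mb)) * sa * sb

a = np.array([0.5, -0.25, 1.5, 0.125])
b = np.array([1.0, 2.0, -0.5, 0.75])
print(bfp_dot(a, b), float(np.dot(a, b)))  # both -0.65625 for these values
```

All the expensive work happens in the integer dot product, which is exactly the part that maps onto cheap fixed-point multipliers.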

The requirements for AI/ML training and inference are very different from what is typically needed in DSPs for signal processing. This applies to memory access and also to math unit implementation. Mike will discuss this in some detail and will show how the new Machine Learning Processor (MLP) unit built into the Speedster7t has native support for BFP and also supports a wide range of fully configurable integer and floating-point precisions. In effect, their MLP is ideal for traditional workloads and also excels at AI/ML, without any area penalty. Each MLP has up to 32 multipliers per MAC block.

Achronix MLPs have tightly coupled memory that facilitates AI/ML workloads. Each MLP has a local 72 Kbit block RAM and a 2 Kbit register file. The MLP’s math blocks can be configured to cascade memory and operands without using FPGA routing resources. Mike will give a full description of the math block’s features during the webinar.

The Speedster7t is also very interesting because of the high-data-rate network on chip (NoC) that can be used to move data between MLPs and/or to other blocks or data interfaces on the chip. The NoC can move data without consuming valuable FPGA resources and avoids bottlenecks inside the FPGA fabric. The NoC has multiple pipes, each 256 bits wide running at 2 GHz for a 512 Gbps data rate. They can move data directly from peripherals, like the 400G Ethernet interface, to the GDDR6 memories without requiring any FPGA resources.
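The per-pipe figure is just datapath width times clock rate:

```python
# Per-pipe NoC throughput: a 256-bit datapath clocked at 2 GHz moves
# 256 * 2e9 = 512 Gbit/s, i.e. 64 GB/s per pipe.
bits_per_beat = 256
clock_hz = 2e9

gbps = bits_per_beat * clock_hz / 1e9
print(f"{gbps:.0f} Gbps = {gbps / 8:.0f} GB/s")  # -> 512 Gbps = 64 GB/s
```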

Achronix will make a compelling case for why the native implementation of BFP in their architecture, together with its many other groundbreaking features, is a very attractive choice for AI/ML as well as for a wide range of more traditional FPGA applications such as data aggregation, IO bridging, compression, encryption and network acceleration. The webinar will include real-world benchmarks and test cases that highlight the capabilities of the Speedster7t. You can register now to view the webinar replay here.


Build Custom SoC Assembly Platforms

by Bernard Murphy on 02-25-2020 at 6:00 am

STAR RTL design builder

I’ve talked with Defacto on and off for several years – Chouki Aktouf (CEO) and Bastien Gratreaux (Marketing). I was in a similar line of business back at Atrenta; now I’m just enjoying myself, and I’ve written a few blogs for them. I’ll confess I wondered why they wouldn’t struggle with the same problems we’d had: script-driven RTL editing and design restructuring are real enough problems, but ones for which a solution is needed only infrequently. Recently I had an animated discussion with Chouki, and now I believe I get it.

To explain, I need to back up a couple of steps. First, automating SoC assembly and related functions is now very common. A lot of this process is very mechanical – dropping in IPs and hooking up top level connections, easy to automate through a script and a bunch of spreadsheets. And where it isn’t purely bookkeeping, it lends itself very well to further script-driven additions – in hookup for IO, power management, interrupts, in the software interface through register and memory-map definitions.

IP-XACT was going to be the unifying standard behind all of this, and some organizations bought in enthusiastically – NXP, certain groups in ST and some groups in Samsung for example. Multiple IP vendors also bought in. What’s not to like about having standardized interfaces with your customers?

A lot of design houses weren’t so sure. Their in-house solutions worked fine. When it was time to upgrade, they’d work on the next generation of their solution (I had a similar discussion with Qualcomm years ago), but they weren’t comfortable going all the way to IP-XACT. They liked the flexibility of being able to go outside the lines if they needed to. They also had a lot of legacy databases in CSV and other formats they knew how to read, which would be a hassle to migrate in switching to the standard.

But they still liked IP-XACT (along with other views) as a way to get IP from vendors. In other words, they wanted it all. Standards where it suited them, backward compatibility with legacy data, and flexibility to adapt and innovate at their pace, not the pace of an industry standard.

This is not a great starting point for a canned product. It’s a much better recipe for a platform/infrastructure product: something that will take care of the mechanics of reading and writing multiple formats (CSV, Excel, RTL, IP-XACT, etc.) and provide a centralized object model, on top of which you can script to read, write or modify to your heart’s content.

Who cares about this? Pretty much anyone doing SoC design. It doesn’t take a lot of effort to figure out that Apple, Google, Samsung, Qualcomm, the storage guys and many others are recruiting people with IP-XACT expertise and/or talking about what they’re doing at a variety of conferences. I’m sure few if any of them are diving head-first into full-blown IP-XACT. I didn’t get this from Defacto; I just did a little searching.

So the big reveal – this is what Defacto is providing: an infrastructure to take care of all the read, modify and update mechanics across all these formats through a unified, persistent data structure, letting customers build their value-add by scripting on top of APIs to the object model. They also provide a number of implementation-centric functions and checks.

Now that makes sense to me.

You can learn more about Defacto HERE.

Also Read

Another Application of Automated RTL Editing

Analysis and Signoff for Restructuring

Design Deconstruction


China Chip Equip Embargo just got real

by Robert Maire on 02-24-2020 at 10:00 am

Chip Embargo Just Got Real

Worst Case Scenario now possible!

  • Embargo could extend beyond China to Taiwan (TSMC)
  • Likely backs up ASML pressure & Huawei indictment
  • “Maximum Pressure” campaign similar to Iran
  • Not likely to go away through trade negotiations

US to restrict chip equipment sales to Huawei producers
The Wall Street Journal confirmed what we have been saying for a while: that US semiconductor equipment companies will likely be restricted in foreign sales. The confirmation goes beyond our likely worst-case scenario of halting China sales, to halting sales to any company that could supply Huawei with chips.

This would go well beyond China and would include Taiwan (TSMC) and other countries that could produce any sort of chip that could help Huawei. This is much more far-reaching and impactful than even a disastrous embargo on China alone.

Collision Course with Taiwan
This potential embargo would put us on a collision course with Taiwan by essentially forcing them to take sides in the semiconductor war between the US and China. Given that TSMC makes chips for Huawei, this would in essence force TSMC to stop doing business with Huawei, much as we forced the Dutch to stop doing business (in EUV) with China.

This would put even more pressure on China’s desire to re-absorb the runaway province otherwise known as Taiwan. It could get even uglier very fast.

Embargo has been a long time coming, won’t go away quickly
We predicted a potential embargo of US semiconductor equipment sales to China starting two years ago. We were roundly criticized and dismissed by most in the industry and other analysts who suggested “it will never happen”, and here we are, it’s happening. We then predicted a year and a half ago that ASML would be the first to be impacted, and were again scoffed at, but then it happened.

We also said in our recent report on the ASML blocked EUV tool that it would be difficult for the US to keep up the pressure on the Dutch while the US continued to sell equipment to China. We said that this meant the US would likely follow through with its own restrictions on US equipment makers…and here we are. Exactly as predicted.

This is not a trade negotiations bargaining chip

Maximum pressure
The recent indictment of Huawei and this potential equipment embargo do not appear to be bargaining chips: the trade discussions are over, and these points were never raised in them. The ASML pressure campaign was also conducted behind the scenes.

These multiple prongs of attack look a lot like the maximum-pressure campaign on Iran: a multi-faceted assault across many fronts attempting to force a specific outcome. In this case the outcome is disabling Huawei and preserving US leadership in 5G.

All this suggests that these pressures will not go away quickly or easily, and will not be negotiated away with the stroke of a pen; it’s long-term trench warfare.

US equipment companies could get sacrificed
From a political-fallout perspective, the three major US equipment companies (Applied Materials, KLA and Lam) are all in California, which the current administration does not count on for support and seems to be at war with on several other fronts. There are no equipment companies in Kentucky or in swing states, nor are any farmers harmed.

Much as we suggested that ASML would be a good opening shot due to zero political fallout, sacrificing US equipment companies does not have significant political consequences, as the US ratchets up the pressure.

Impacts more than a majority of revenues
It’s probably safe to say that a Huawei-related chip equipment embargo could impact virtually all customers, with the possible exception of Intel, and even Intel could be affected if it wanted to sell chips to Huawei (albeit a very low probability). It could obviously apply to Samsung’s memory business as well as its foundry business. Enforcement could be a nightmare: trying to figure out which equipment, sold to whom, was used to make chips for whom. The semiconductor industry is so interconnected and international that it would have global impact.

This news has already permanently damaged US equipment makers
If a chip maker were on the fence between US and Japanese (or Israeli or Korean) equipment, it’s clear the choice would be the foreign supplier, to reduce the risk of US government intervention. It also obviously redoubles China’s efforts to make its own equipment. Even non-US equipment makers are vulnerable, as ASML found out, but probably to a lesser extent.

Supports our more cautious view of the stocks

The China-related risks to the chip and chip equipment industries change so fast it makes your head spin. First we had trade risk, then we had Corona risk, and now we are back to a US government policy risk against China.

Death by poison, death by hanging or death by firing squad…take your pick; they are all negative. It just adds up to more risk against a backdrop of stocks that have been on a tear.

In our view, Corona has been correctly discounted by the market as a transitory, temporary issue that we will get over. The trade dispute seems to have been somewhat dealt with by some agreements and by kicking cans down the road. The embargo risk is very long-term in nature, much more widespread than China alone, and quite complicated given all the interrelationships. It would seem logical that as the news gets out, the discount should be higher.

As no company other than ASML has had its shipments stopped by this potential embargo, the market is likely dismissing it. We would not.

We have already heard of the government digging into company records of who is shipping what to China. If the government is already sniffing around, it suggests they are serious about making a “bad list” of companies and equipment and are not that far from implementation.

The first company to have sanctions announced against it will open the floodgates. ASML is far away in Europe with lots of European investors. When it hits Applied, KLA or Lam, it will get serious, fast…


Semiconductor Recovery in 2020?

by Bill Jewell on 02-24-2020 at 6:00 am

GPD 2019 Semiconductors SemiWiki

Semiconductors down 12% in 2019
World Semiconductor Trade Statistics (WSTS) reported the world semiconductor market was $412 billion in 2019, a 12.1% decline from $469 billion in 2018. Most of the decline was in the memory market (primarily DRAM and flash), which was down by a third from a year earlier. However, overall semiconductor demand was down for the year as the global economy slowed from 3.6% growth in 2018 to 2.9% in 2019, according to the International Monetary Fund (IMF).

We at Semiconductor Intelligence have been tracking the accuracy of semiconductor market forecasts from various sources for several years. We look at publicly available projections made late in the prior year or early in the forecast year before any WSTS monthly data for the forecast year is released (generally in early March). Based on these criteria we had a tie for most accurate forecast for 2019. Objective Analysis in December 2018 predicted the semiconductor market would decline 5% in 2019 primarily due to a weak memory market. Also in December 2018, Morgan Stanley called for a 5% decline in 2019. Thus, Objective Analysis and Morgan Stanley win our (virtual) forecasting prize for 2019. UBS was a close second with a projected 4.3% decline. Most forecasters expected low single-digit growth going into 2019. We at Semiconductor Intelligence in December 2018 projected growth of 4% for 2019.

Outlook for 2020 and 2021
The IMF January 2020 World Economic Outlook Update called for an acceleration of World GDP growth from 2.9% in 2019 to 3.3% in 2020 and 3.4% in 2021. The advanced economies are not expected to see any GDP acceleration, with 2020 and 2021 GDP growth of 1.6%, slightly lower than 1.7% in 2019. The acceleration will come from emerging and developing economies, with GDP growth picking up from 3.7% in 2019 to 4.4% in 2020 and 4.6% in 2021. Slowing growth in China will be more than offset by increasing growth rates in other regions, particularly India and the ASEAN-5.

U.S./China Trade and Brexit
Two key issues which have been major concerns for the global economy over the last couple of years have been at least partially resolved. The U.S. and China reached a Phase One trade deal which was signed January 15, 2020. Under the deal, the U.S. will lower tariffs on Chinese imports and China will purchase more U.S. agricultural goods and financial services. While many issues remain, the deal reduces the potential of a major U.S.-China trade war.

The United Kingdom (UK) officially withdrew from the European Union (EU) on January 31, 2020. This process, known as Brexit, will go through a transition period for the rest of the year to negotiate new trade agreements and other issues between the UK and the EU. Prior to the official exit, Brexit had been a question mark on the UK and EU economies.

Coronavirus
Just as U.S./China trade and Brexit are in the beginning phases of resolution, a new threat to global health and the economy has emerged. Reports of a deadly new respiratory virus began to emerge from Wuhan, China in December 2019. The novel coronavirus has been named COVID-19 by the World Health Organization (WHO). WHO stated that as of February 20, China had reported 74,675 cases and 2,121 deaths. Outside of China there have been 1,076 cases and 7 deaths.

COVID-19 is certainly a global health concern. The economic concern is how efforts to stop the spread of the virus will affect China production and demand. The IMF did not consider COVID-19 in its January 2020 economic update. The IMF has since stated the impact of COVID-19 is dependent on how quickly the virus is contained. The optimistic case is a quick resolution would result in a sharp drop in China’s GDP growth in 1Q 2020, but return to normal growth over the course of the year. The pessimistic case is a more severe outbreak would severely weaken the Chinese economy and disrupt global supply chains.

Some companies have taken measures to adjust to production issues in China. Nikkei Asian Review reported Samsung’s Galaxy smartphone factories in Vietnam are at full capacity as it is moving components from China. Apple issued a guidance update on February 17 stating it will not meet its March quarter guidance due to iPhone supply constraints and lower demand for its products in China. Digitimes disclosed Apple is considering moving manufacturing of some of its devices from China to Taiwan. Omdia (a new organization resulting from the merger of the research division of Informa Tech with IHS Markit’s technology research) projects COVID-19 will reduce smartphone demand in China, disrupt display panel manufacturing, and affect the Chinese games market. EETimes issued a detailed report on the impact of COVID-19 on technology companies.

Outlook for Semiconductors in 2020
The near-term outlook for the semiconductor market is uncertain due to the COVID-19 outbreak. Of the eight top companies providing 1Q 2020 revenue guidance, six expect a revenue decline versus 4Q 2019, ranging from -3% for Texas Instruments to -14.3% for STMicroelectronics. Qualcomm expects a 4.4% increase in revenue and Infineon expects 5.0%; however, both companies mentioned concerns over the coronavirus (COVID-19).

Semiconductor market forecasts for 2020 vary widely. Objective Analysis (co-winner of our 2019 forecast prize) expects a 2020 change of “at best zero” based on continuing memory weakness. At the other extreme, Future Horizons is calling for a minimum of 10% growth, with 15% possible. Other forecasts are in the range of 3% to 8%. We at Semiconductor Intelligence have lowered our 2020 forecast from 10% in December 2019 to 7.0%, based primarily on COVID-19 concerns.

The outlook for 2021 is slightly better than 2020. All four organizations providing 2021 forecasts see growth improving from 2020, ranging from 4.9% from the Cowan LRA Model to 9.6% from IHS Markit. Our Semiconductor Intelligence forecast for 2021 remains at the 8% we projected in December 2019. We are assuming COVID-19 is contained in the next few months and the fundamentals driving improved economic growth in 2021 remain in place.

Also Read:

CES 2020: still no flying cars

Semiconductor CapEx Warning

Electronics, COVID-19, and Ukraine


The COVID-19 Virus Outbreak and the Semiconductor Supply Chain

The COVID-19 Virus Outbreak and the Semiconductor Supply Chain
by Mark Dyson on 02-23-2020 at 10:00 am

The COVID 19 virus outbreak and the semiconductor supply chain

Welcome to my weekly roundup of the key semiconductor news from around the world from last week.  The COVID-19 virus outbreak and its impact on the semiconductor supply chain continues to dominate the news, but there was also lots of other news from around the world, so please read on.

Let’s start with a review of where the industry stood at the end of January. This article from SEMI shows that through December into January there was a steady recovery in the global electronics supply chain, with the SEMI equipment market showing growth and the global purchasing managers index moving into expansion territory in January.  But in late January COVID-19 began to make its negative impact felt, causing disruption to supply chains and shutting down factories around Wuhan, China and other electronics manufacturing centers.  The full impact of the COVID-19 outbreak has yet to show in the numbers and will only be seen in February and March sales figures.

The impact of COVID-19 on the various semiconductor manufacturing segments is analysed in two articles, one from ECNS in China and one from EETimes. Both paint the same picture: the wafer fab sector does not seem to be badly affected, with most fabs up and running.  This may be partly due to the locations of the wafer fabs and partly due to the high level of automation in wafer manufacturing.  The main impact is being felt downstream, in assembly plants and in component manufacturing sites for optics and sensors. These operations are labour intensive, and many of these factories closed over Chinese New Year and were not allowed to restart until February 10th or later; some are still awaiting approval from local governments to restart. Even when they can restart, getting back the full workforce remains a big challenge with many people still quarantined, so for assembly and optical/sensor suppliers the impact will only really be felt in the coming weeks.  Logistics for shipping product is another factor affecting the supply chain.  Despite the overall disruption, the sentiment is that the sector will recover once the outbreak is over.

This week Apple issued a rare revenue warning that the March quarter would be lower than previous guidance due to the impact of COVID-19, however Apple did not give a revised guidance. There is also expected to be an impact to other Chinese phone companies like Huawei, Oppo and Xiaomi who mainly produce in China as well as suppliers like Foxconn.

Samsung has also been affected by COVID-19. This weekend it announced that one coronavirus case had been confirmed at its mobile device factory complex in the southeastern city of Gumi, Korea, causing a shutdown of the entire facility there until Monday morning. The plant produces only a small proportion of Samsung’s phones, with most production done in Vietnam and India.

Away from COVID-19, Dialog Semiconductor announced it will acquire Adesto for an enterprise value of $500 million ($12.55 per share in cash).  Founded in 2006 and based in Santa Clara, Adesto is a leading provider of innovative custom integrated circuits (ICs) and embedded systems for the Industrial Internet of Things (IIoT) market, with approximately 270 employees.

The US continues with its plans to impose more restrictions on companies selling technology to Huawei.  This week the Pentagon dropped its opposition to the US Commerce Department’s proposal to further tighten restrictions on selling American technology to Huawei by tightening the US-content threshold in the rule from 20% to 10%.

At the same time as lawmakers were planning extra restrictions, the Commerce Department announced that Huawei will get another 45-day reprieve from the original restriction through another temporary general license. This is the fourth extension to date; previously, three 90-day temporary general licenses were issued in May, August, and November.

In addition, Huawei has said it has secured more than 90 commercial 5G contracts worldwide, an increase of nearly 30 from last year despite the relentless pressure from U.S. authorities. At a press conference in London on Tuesday, Ryan Ding, president of Huawei’s carrier business group, said “We have 91 commercial 5G contracts worldwide, including 47 from Europe,” and added that “One year ago, I said we are leading by 18 months ahead of our competitors in 5G technology. Now, we still maintain that leadership.”

Compound semiconductor substrate manufacturer AXT said that its Q4 revenue dropped 17% year-on-year due to a decline in GaAs and Ge substrate sales.  For full-year 2019, it reported revenue of $83.3m, down 18.7% compared to 2018.

Market research company Yole Développement expects the global 3D imaging and sensing market to expand from $5.0 billion in 2019 to $15.0 billion in 2025, a 20 percent CAGR.  It expects the main 3D sensing trend to switch from the front to the rear of phones with the mass adoption of ToF cameras. According to Yole’s 3D imaging & sensing report, rear attachment will surpass front attachment, with the market penetration rate reaching about 42 percent in 2025.
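As a quick sanity check on Yole’s numbers, the implied compound annual growth rate can be computed from the 2019 and 2025 market sizes. This is a sketch of the standard CAGR formula, not anything from Yole’s report itself:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 5.0, 15.0, 6  # $B, 2019 -> 2025, per Yole's figures
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # close to the quoted 20 percent
```

Tripling over six years works out to just over 20% per year, consistent with the quoted figure.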

STMicroelectronics has announced a collaboration with TSMC to accelerate the development of Gallium Nitride (GaN) process technology and the supply of both discrete and integrated GaN devices to market. Through this partnership, ST’s GaN products will be manufactured using TSMC’s GaN process technology.

Finally a couple of articles about the semiconductor supply chain.  One article from SEMI is about building a healthy supply chain for critical subsystem components where the lack of alternative suppliers causes significant risks.

Another article, by UCLA Anderson, reviewed pricing in the semiconductor industry and found that in about 26% of transactions, rather than granting volume discounts, the manufacturer charged more for large quantities. This shows how much suppliers value production capacity when negotiating pricing.

That’s all for this week. If you enjoyed what you read, please like and share my article so that others may enjoy it too.


Cryptocurrency Fraud Reached $4.3 Billion in 2019

Cryptocurrency Fraud Reached $4.3 Billion in 2019
by Matthew Rosenquist on 02-23-2020 at 6:00 am

Cryptocurrency Fraud Reached 4.3 Billion in 2019

Cryptocurrency fraud is aggressively on the rise and topped over $4 billion last year, according to the security tracking company Chainalysis.

This is especially shocking to those who thought they had found an incredible investment in the cryptocurrency world, yet were swindled out of everything. As part of these cryptocurrency scams, victims are lured into investing with the hype of significant returns. Once committed, they are often shown how their accounts are quickly accruing vast wealth, which encourages them to pour even more of their money into the con. The mirage eventually disappears, as does the money, when the operation shutters without notice and the swindlers vanish with all the deposits. Victims are left with the realization they were duped as part of an elaborate hoax and are powerless to recover their money.

Chainalysis recently produced an industry report highlighting the scope of the problem. The organization specializes in helping businesses and governments understand illegal cryptocurrency transactions. The data showcases the rapid rise in 2019 of big Ponzi scams, which represented the bulk of the losses. The top six large-scale scams were collectively responsible for about 90% of the fraud. It shows that when cybercriminals find the right lure in the cryptocurrency community, such as a Ponzi-style scam, momentum quickly accelerates and draws more victims into the system, becoming massive in scale.

Fraudsters like cryptocurrency
Some of the beneficial attributes of cryptocurrency are being leveraged against those who aren’t mindful of the risks. Cryptocurrency has a reputation as a financial opportunity because of its history of volatile price swings, both high and low. The media has spotlighted many who have made considerable fortunes from meager beginnings. Scammers take advantage and reach out to this growing global community that desires fast riches, yet is very naïve about the risks.

The ability to transfer crypto tokens virtually means they are everywhere but nowhere. Criminals understand this dichotomy and use it to their advantage. Once the money is in the hands of crooks, it begins a rapid journey across the digital landscape and into dark corners where it is hard to trace or impound.

Victims are often left with a total loss and little hope they will ever get any of their money back. For criminals, the potential of unimaginable gains, sometimes in the hundreds of millions of dollars or more, far exceeds the risk of being caught and prosecuted.

Privacy, Regulations, and Law Enforcement on the edge
Part of what makes these scams so attractive for cybercriminals to run is the ability to remain unidentified. The inherent anonymity of users is a challenge in the cryptocurrency world. Regulatory rules for Know Your Customer (KYC) and Anti-Money Laundering (AML) are proliferating across legitimate exchanges and services, which greatly helps identify fraudsters and increase accountability, but there is a lack of consistency and there are always workarounds. Other services promote their support for customer privacy and account anonymity, often finding loopholes in or outright avoiding such requirements.

Many of these services are not intentionally malicious or fraudulent, but as part of their belief in the benefits of privacy, they are indirectly supporting potentially illicit activities. Overall, the vast majority of cryptocurrency transactions are legitimate and only a small minority of the overall transactions are tied to illegal activity.  But criminals will use whatever tools available to shield themselves from accountability and prosecution.

Many in the crypto community, who are doing nothing illegal, greatly value their privacy and anonymity. They are attracted to services that don’t require identification and keep their transactions confidential. There is a natural tension in the system that the growing community is still struggling with. I have spoken with many who are staunch advocates for their rights of privacy, in some cases even to the extent of being un-trackable by governments, yet show immediate regret and anger at those same entities when they lose money to a fraudster and have no recourse for justice. Still, some accept those risks as table stakes and prefer to remain anonymous.

Law enforcement is facing great difficulties adapting to digital crimes but is slowly getting better. For cryptocurrency, they work with experts to track transactions in public blockchains and collaborate with major exchanges to identify criminal activities and trace the flow of illicit funds. It is not easy and the growing number of victims makes it impossible to help even a fraction of those defrauded. The focus tends to be on big cases, like the multi-billion-dollar PlusToken Ponzi scam in 2019 where millions of users were told they could earn 10% a month on their investment. Ultimately, the criminals pulled in over $2 billion before it collapsed and the money is now gone. It has been digitally laundered and dispersed among thousands of anonymous accounts.

Although Chinese authorities were able to identify and apprehend 6 of the individuals behind the scheme, most crimes go unsolved. The chance of restitution for the PlusToken victims is almost non-existent.

The continued rise of the cryptocurrency market and the ease with which victims can be convinced encourage the greed of fraudsters. Scams are getting more elaborate and convincing. Law enforcement is getting better but must face the evolving challenges of technology. More ‘privacy’-designed currencies are gaining momentum and will pose new hurdles for investigating and prosecuting criminals, forcing authorities to continually adapt. In the meantime, people will be at risk. So far, using common sense in vetting investments is the best way to avoid cryptocurrency victimization.


Edge Computing – The Critical Middle Ground

Edge Computing – The Critical Middle Ground
by Mike Gianfagna on 02-21-2020 at 10:00 am

Computing hierarchy

Ron Lowman, product marketing manager at Synopsys, recently posted an interesting technical bulletin on the Synopsys website entitled How AI in Edge Computing Drives 5G and the IoT. There’s been a lot of discussion recently about the emerging processing hierarchy of edge devices (think cell phone or self-driving car), cloud computing (think Google, Amazon or Microsoft) and the newest middle ground in between (edge computing). The current deployment of 5G networks delivers the capability of creating much more data than ever before, and how and where that data will be processed makes the processing hierarchy even more important.

Some facts are in order.  First an observation from me – please don’t think 5G is for your cell phone. While your carrier will likely make a big deal about 5G reception, that isn’t the primary use of this technology; your cell phone is plenty fast enough now. 5G holds the promise of wirelessly linking many other data sources in a high bandwidth, low latency way. This is part of the promise of IoT. A quote from Ron’s piece helps drive home the point:

“By 2020, more than 50 billion smart devices will be connected worldwide. These devices will generate zettabytes (ZB) of data annually growing to more than 150 ZB by 2025.” (Data courtesy of Cisco.)

A little perspective is in order. A zettabyte is a billion terabytes, or 1,000,000,000,000,000,000,000 bytes if you prefer the long form. According to Wikipedia, in 2012 there was upwards of 1 zettabyte of data in existence in the world. So, a 150-fold increase, just from edge devices, is kind of daunting. Given this situation, when you consider the traditional model of IoT (edge device) data processing in the cloud, a few problems come up. Ron’s article provides this catalog of issues:

  1. 150ZB of data will create capacity issues if all processed in a small number of places
  2. Transmitting that much data from its location of origin to centralized data centers is quite costly, in terms of energy, bandwidth, and compute power
  3. Power consumption of storing, transmitting and analyzing data is enormous
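To put the quoted scale in perspective, the zettabyte arithmetic from the paragraph above can be checked directly. This is just a sketch of the unit conversion, using the figures already quoted in the article:

```python
# A zettabyte is 10**21 bytes, i.e. a billion terabytes.
ZB = 10**21  # bytes per zettabyte
TB = 10**12  # bytes per terabyte
print(ZB // TB)  # terabytes per zettabyte: 1,000,000,000

projected_2025 = 150 * ZB  # annual device data projected by 2025 (Cisco)
world_2012 = 1 * ZB        # approx. all data in existence in 2012 (Wikipedia)
print(projected_2025 // world_2012)  # the 150-fold increase noted above
```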

Regarding the second point, Ron goes on to report that estimates project only 12% of current data is even analyzed by the companies that own it and only 3% of that data contributes to any meaningful outcomes. So, finding an effective way to reduce cost and waste is clearly needed. Edge computing holds great promise to deal with these issues by decentralizing the processing task – essentially bringing it closer to the data source. More benefits reported by Ron include:

  1. Enable network reliability as applications can continue to function during widespread network outages
  2. Potential security improvements by eliminating some threat profiles such as global data center denial of service (DoS) attacks
  3. Provide low latency for real-time use cases such as virtual reality arcades and mobile device video caching
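The 12% and 3% figures Ron cites compound into a strikingly small useful fraction, which is worth spelling out. A minimal sketch of that arithmetic, assuming the two percentages multiply independently:

```python
# Only 12% of data is analyzed, and only 3% of that analyzed data
# contributes to meaningful outcomes.
analyzed = 0.12
useful_of_analyzed = 0.03
useful = analyzed * useful_of_analyzed
print(f"{useful:.2%} of all data drives meaningful outcomes")  # 0.36%
```

In other words, well under half a percent of the data generated today produces actionable value, which underlines the cost and waste argument for edge computing.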

The last point is quite important. Ron points out that “cutting latency will generate new services, enabling devices to provide many innovative applications in autonomous vehicles, gaming platforms, or challenging, fast-paced manufacturing environments.” While local processing can go a long way to reduce waste and cost, more efficient methods are also important. Ron points to AI as a critical enabler for this to happen.

With this backdrop, Ron explores various edge computing use cases, market segments and the impact all this will have on server system SoCs. One use case described by Ron centers on the Microsoft HoloLens. It’s a fascinating case study of augmented reality and its demands for low latency and low power. Ron then talks about the power, processing and latency requirements of the various edge computing segments. If you think there’s one edge computing scenario, think again. The piece concludes with a discussion of the impact all this will have on server system SoCs. AI accelerators are a key piece of this discussion.

If any of this gets your attention, I strongly recommend reading Ron’s complete technical bulletin.  There is a lot of compelling detail there regarding AI and the edge. For the chip designers out there, I’ll leave you with one excerpt from Ron’s piece that summarizes the challenges and opportunities of the next generation of edge computing.