
AMAT- Solid QTR & Great Guide- Share gains- Memory?

by Robert Maire on 08-16-2020 at 10:00 am


  • Higher foundry/logic exposure helps
  • Little or no Covid or China trade impact
  • Nice quarter but even better guide

Applied reported revenues of $4.4B and non-GAAP EPS of $1.06, nicely above street estimates of $4.2B and $0.95 in EPS. Guidance is for revenues of $4.6B ±$200M and EPS of $1.17 ±$0.06, versus current expectations of $4.35B in revenues.

Management sees growth into H2
Most importantly, the company sees improving demand and revenues in the second half of 2020 and an equal or better 2021. While there is still a caveat about Covid and other headwinds, it sounds as if the company is not planning on any fall-off in demand. In fact, it sounds like the company is assuming that memory will come back stronger than it has been as DDR5 rolls out.

Applied has industry high foundry/logic exposure
TSMC has long been the house that Applied helped build. Applied’s relationship with TSMC has always been great, Applied has always had more than its fair share of TSMC’s business, and the strength in foundry has obviously been very, very good to Applied.

China, likewise, is a market where Applied has long been a leader and pioneer, starting many years ago, and that early investment has paid great dividends as China has increased its spend over time.

Applied talked a lot about share gains, but we are also sure that a good portion of the gains come from the fact that Applied’s best customers are the ones doing the spending, versus the memory makers (Lam’s home turf).

Share gains
Management talked about the obvious outperformance in foundry/logic but also pointed to a 25% increase in DRAM “patterning share”. They also claimed the highest growth rate in the industry in process diagnostics (read that as a swipe at KLA, the leader in process control). Metal etch also got a special mention, growing 30%, with over 5,000 chambers installed. Quite an accomplishment.

Covid & China Trade = No Worries
It would be hard to tell from the conference call that we are in the middle of a pandemic, let alone an escalating trade war with China. China trade restrictions barely got a passing mention, and Covid sounds like it’s over and done with as far as Applied is concerned, as the company has moved to “a new way of working”. This is much the same as we heard from both Lam and KLA, which also saw no ongoing Covid or China impact. ASML seems to have the only China impact we know of.

Kokusai or not
Applied remained very upbeat about the Kokusai acquisition going through even though it’s months late. The company said that it had all approvals except for one, which we assume to be China, which remains a wild card. China will likely hold approval ransom, which may get even worse given the increasing tension with the US. The only, but major, mitigating factor is that Applied continues to sell China a boatload of critical semiconductor equipment to help it gain dominance in the industry.

We assume that China won’t screw one of its main arms suppliers in this technology war with the US. If the worst happened and the deal fell apart, our view is that it wouldn’t be a huge loss, as Applied overpaid for a low-tech asset just to buy business.

The Stock
Applied’s stock was a bit weak during the trading day but made it all back in after hours on the positive news. We would point out that most of the good news was already telegraphed by Lam and KLA and is already in the stock. Even though the company has a positive outlook, Covid and general economic risk still exist. Semiconductor stocks have been on fire and could use a little rest here. Although the Applied news was good, it doesn’t motivate us to go out and buy the stock at current levels, nor does it motivate us to buy related stocks.


Uber: Dara’s Distracted Driving

by Roger C. Lanctot on 08-16-2020 at 6:00 am


Uber has taken a highly profitable business and turned it into a very unprofitable and dangerous one with the help of the pandemic. With guile, innovation, and theft, Uber founder and pitchman Travis Kalanick spun up the ride hailing wonder into a global transportation leader built upon rapid growth and a bare-knuckle approach to competitors and regulators.

The freewheeling ways of Kalanick caught up with him and he was replaced by the more sedate, though no less clever, Dara Khosrowshahi. With fast growth in the rearview mirror at Uber, Khosrowshahi has been forced to turn to distraction and misdirection to simultaneously satisfy drivers, regulators, and investors.

Uber investor presentation: https://s23.q4cdn.com/407969754/files/doc_financials/2020/q2/InvestorPresentation_2020_Q2.pdf

This post-pandemic chapter of Uber’s existence has seen the company’s business model whipsawed – simultaneously validated and vanquished – as drivers and their cars vanished along with demand for ride hailing services. The ride hailing business plunged 70%-80% around the world suddenly thrusting Uber into the uncomfortable position of no longer being able to deliver relentless growth.

To compensate, Uber embraced food delivery, doubling down on its up-and-coming Uber Eats operation and acquiring rival Postmates. The move allowed Uber to perform an impressive pirouette, shifting its business mix nearly overnight from majority people delivery to majority food delivery.

Khosrowshahi put on a distracting show for investors last week with a 57-slide presentation touting the merits of food delivery for restoring rapid growth to Uber’s prospective revenue even as the company enters a second highly competitive space. The key difference, though, between food delivery and people delivery is the uncertain prospect for profits.

Not only are margins tighter – if they exist at all – in food delivery, competition in the sector globally is already intense.  There is no question that a pivot to food delivery made sense and makes sense in the context of rolling economic lockdowns around the world in the face of the coronavirus pandemic, but the associated profit margins remain unproven.

Investors clearly bought Khosrowshahi’s pitch as Uber’s stock survived last week’s earnings report fairly unscathed. But Khosrowshahi followed up that pitch with an opinion piece in the New York Times this week, simultaneously arguing for and against treating Uber’s drivers as employees by offering them benefits.

Khosrowshahi’s NYTimes commentary speaks to the fallacy underlying Uber’s business model. Uber famously doesn’t own any cars and uses drivers that are not employees. This is the key to Uber’s – and other ride hailing operators’ – profitability: limited capital investment and limited obligations to drivers.

The model was validated during lockdown because drivers vanished at the same time as passengers thereby immediately reducing expenses – though also destroying revenue. The pandemic arrived in the midst of Uber’s regulatory battles with the U.S. states of New York and California, both of which have been seeking to force the company to treat its drivers as employees. (Something similar is playing out, still, in London, as well.)

But freedom and flexibility are the foundations of Uber. Khosrowshahi argues in the New York Times that it offers its gig workers that which they value most highly: maximum flexibility. If the company were to treat its drivers as employees they not only would lose that flexibility, but Uber wouldn’t be able to employ as many of them. And without that famous flexibility, it would be more difficult to recruit drivers.

If Uber offers any benefits at all to its contractor drivers, writes Khosrowshahi, they legally become, in many if not most states, employees with a range of legal obligations that undermine the otherwise highly profitable business model. Khosrowshahi argues for the creation of a third path forward whereby the company could offer gig workers some benefits, such as a contribution to pay for healthcare, without triggering full employment obligations and regulations.

In the end, Khosrowshahi’s “third way” – as he describes it – crafted in response to a critical editorial published by the New York Times – is yet another distraction. Khosrowshahi says that if such a system had been in place the company would have contributed $665M toward driver benefits last year alone.

The argument is a distraction because Khosrowshahi maintains that most drivers don’t want these benefits. Most already have healthcare coverage from other sources and they want to preserve their flexibility.

Dara Khosrowshahi’s distractions mask a bigger fundamental problem for Uber and other ride hailing operators that function in a similar manner. The company is founded on maximum freedom and flexibility and minimum accountability and reliability.

It is only recently that Uber has begun advertising its service to consumers. Previously, Uber almost exclusively advertised in order to recruit drivers. There is a reason for this. Uber cannot deliver a reliable and predictable level of service because of its maximum freedom and flexibility ethos.

There are no protocols around how passengers are treated. In fact, Uber passengers and drivers both routinely report abusive interactions. Uber’s freewheeling approach has ultimately undermined its ability to make any quality-of-service commitment.

In a post-coronavirus world, this max-flex strategy means drivers and passengers are supposed to wear masks – but do not always do so. Worse, Uber has failed to require drivers to install partitions between the front seat and passenger compartments of their cars.

There’s a reason Uber’s ride hailing revenue remains well below pre-COVID-19 levels. It is simply unsafe to hail and ride in an Uber without driver and passenger-protecting partitions. Uber may require masks, but masks are insufficient for safe delivery of people in the confined space of an automobile.

The truly strange thing about Uber, though, is the willingness and ability of the company to throw around big figures like $665M for driver benefits or $8B in the bank without defining a pro-active strategy for making driving safer and treating drivers better. Clearly Uber has the resources to do the right thing. Instead, CEO Khosrowshahi is punting the ball to regulators to find a “third way” solution to the company’s contractor-employee conundrum.

Khosrowshahi’s distractions and disingenuousness are unsafe at any speed. Food delivery will not deliver enough growth or profit to save Uber. The company must fix its fundamentals or, in the long run, it will fail.

New York Times – “I am the CEO of Uber. Gig Workers Deserve Better” – https://www.nytimes.com/2020/08/10/opinion/uber-ceo-dara-khosrowshahi-gig-workers-deserve-better.html

Rideshareguy Interview with Dara Khosrowshahi: https://www.youtube.com/watch?v=rSiNJCwbcEk


NetApp: Comprehensive Support for Moving Your EDA Flow to the Cloud

by Mike Gianfagna on 08-14-2020 at 10:00 am


With this post, we welcome NetApp to the SemiWiki family. NetApp was founded in 1992 with a focus on data storage solutions. Initial market segments were high-performance computing (HPC) and EDA, and their first customers were EDA and semiconductor companies. NetApp has become a primary force in on-premise data management for the EDA industry. Their name should be familiar to most SemiWiki readers. Whether you know it or not, you’ve probably used NetApp technology to manage and/or recover data on your projects. Building on their solid on-premise data management accomplishments, NetApp now offers comprehensive support for moving your EDA flow to the cloud.

Chip design teams are connecting on-premise flows to the cloud, creating hybrid multi-cloud architectures. Since NetApp has been providing on-premise EDA data management for over 20 years, they are an excellent partner to support this migration and have become a leader in this area.

I got the chance recently to chat with Scott Jacobson, EDA Solutions Strategist at NetApp. Scott is part of a vertical market team that focuses on the unique requirements of EDA. Scott described an “EDA-first” strategy at NetApp, where the special requirements of chip design are part of the NetApp integrated approach. He talked about the specific HPC demands of tasks such as verification, place and route and even DFM-related processes such as optical proximity correction (OPC).

I know something about OPC, having run a company in that space, and I can tell you it’s a particularly nasty problem from a compute and data management point of view. Scott’s perspective is that EDA workflows are unique, with some processes managing metadata stored in millions of files and others, such as layout data, growing exponentially in size as designers migrate to advanced nodes.

EDA in the Cloud – Self-Managed to Fully Managed

Scott explained that NetApp’s cloud strategy for chip design is cloud vendor neutral. The company is EDA tool agnostic as well, meaning to the extent possible they will support all options. NetApp’s core storage operating system is called ONTAP and it runs on both on-premise hardware and in the cloud.  This capability can be used to develop a self-managed hybrid cloud/on premise environment with low effort. This is how most organizations start their cloud journey. In their words, ONTAP allows users to accomplish the following with their existing infrastructure and staff:

  • Simplify the transition to a cloud-ready data center
  • Modernize infrastructure with flash and cloud, without creating silos
  • Deploy emerging applications with enterprise-grade data services
  • Radically change the economics of your data center
  • Move data to run optimally at the edge, in the core, or in the cloud
  • Use one set of tools to manage and protect data, wherever the data is

I’ve personally lived through a massive chip design cloud migration and I can tell you NetApp’s comprehensive support for moving your EDA flow to the cloud will help a lot. Scott explained that ONTAP offers proven storage efficiency (this is NetApp after all), simple application provisioning, automatic tiering in the cloud and consistent efficiency (with written guarantees).

NetApp also supports the storage infrastructure for managed services from Azure, AWS and Google Cloud. This allows organizations to scale gracefully from a hybrid starting point to a fully managed environment from the cloud supplier of their choice. This breadth of support and flexibility is unique.

The NetApp approach for hybrid cloud integration delivers excellent scalability, with increased availability, lower time to deployment and reduced costs as shown below.

EDA in the Cloud – Built-in Data Protection & Security

Every chip design contains someone’s “crown jewels” – typically many such instances, from IP providers, foundries and, of course, the original design work of the team. Moving from an on-premise environment to the cloud can be jarring from a security point of view. The process and tools to implement security and data continuity are known and understood for an on-premise implementation. Once in the cloud, there is a sense of loss of control.

NetApp has clearly thought a lot about these items and offers a comprehensive infrastructure to lower the stress level. From a data continuity perspective, NetApp offers:

  • Synchronous mirroring
  • Zero planned and unplanned application downtime
  • Zero data loss
  • Set-it-once simplicity
  • Zero change management
  • Hypervisor and application integration

The result is continuous data availability with built-in back-up and recovery protection against hardware failures. Security is also well thought out. Highlights of their security protocols include:

  • Secure management of multiple tenants
  • Multifactor Authentication (MFA)
  • Role-based Access Control (RBAC)
  • Data-at-rest encryption with FIPS certification
  • In-transit data encryption for back-up and DR
  • Onboard and external key management
  • Secure purge to help clean up data spills and meet GDPR compliance
  • Comprehensive logging & auditing

NetApp also helps meet governance and compliance requirements with something called SnapLock. Features include:

  • Write once, read many (WORM) data retention solution
  • SnapLock Compliance certified to meet strict regulatory requirements
    • SEC 17a-4, HIPAA, DACH, Commodity Futures Trading Commission (CFTC) Rule 1.31(b)
  • SnapLock Enterprise enables organizations to meet legal requirements plus protect against ransomware attacks and deletion of critical data
  • License-based feature of ONTAP that works with application software
    • Can be used in a cluster with both SnapLock data and standard data
    • Support for file workloads (NFS and SMB/CIFS)

You can find more details about NetApp EDA support here. The website contains a lot of good information, including a success story from Mellanox and a press release from Synopsys and Google Cloud. They also recently published an eBook entitled Cloud Data Storage: The Promise and The Challenges. All aspects of cloud migration are covered in this book. If you’re contemplating a move to the cloud, I would consider it required reading. You can get a copy of the NetApp eBook here.

That’s it for now.  I hope you found this introduction to NetApp useful. Going forward, we’ll be covering more topics on effective data management strategies from NetApp.

Also Read:

NetApp Simplifies Cloud Bursting EDA workloads

NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry

NetApp’s FlexGroup Volumes – A Game Changer for EDA Workflows


CEO Interview: Isabelle Geday of Magillem

by Daniel Nenni on 08-14-2020 at 6:00 am


Isabelle Geday is the Founder and CEO of Magillem Design Services, headquartered in Paris, France. Isabelle has over 40 years of experience creating innovative platform solutions, in various industries such as oil, telecommunications, IT and EDA. She has a proven track record in managing startup companies internationally, leading them to growth, sustainability and profitability. Isabelle has led strategic programs across product management, platform software, mobile device engineering in an international arena, improving product competitiveness, reliability and product profitability while building a loyal set of customers. Magillem customers now include 18 of the 20 largest semiconductor companies.

Isabelle is a graduate of Ecole Nationale Superieur d’Ingénieurs en Informatique d’Entreprise. She completed postgraduate studies in microprocessor architecture at CNAM and holds the Certificate in Board Administration from IFA. She also attended the School of Art of Le Louvre.

As a company, Magillem has done reasonably well, figuring prominently as an EDA vendor in most of the top 20 semiconductor companies. What makes Magillem so successful?

Let me start by giving a brief history about Magillem. It was founded by five engineers and myself in 2006 with a dream to build an enterprise SoC design platform aimed at tying together the basic design elements namely Specification, Design, and Documentation. We wanted to use IP-XACT (IEEE 1685), since it was an open standard and based on XML as it allowed specifying, programming and documenting with one software platform. Our goal was to shorten the development cycle for SoC/IP designs by enabling design teams to reuse IPs and have a single source of input which could automatically generate the desired outputs in various formats such as RTL (Verilog, VHDL, System Verilog), System C, UVM, RAL models, documentation etc. By providing design teams an easy way by which they could automate their custom design flows, manage design complexity, interoperability and re-use, our tools ensured software, hardware and verification engineers were in-sync and collaborated efficiently. These days, given the current pandemic, design collaboration and automation are becoming even more critical for tapeout success.

At heart, we remain an engineering firm with a passion for solving problems. Over the years, our solutions have tried to address the problems faced by design teams from a flow perspective identifying areas in the SoC design flow where the outputs can be automatically generated in a number of desired formats from a single input source. This enables all members of the hardware, software and verification teams to be more productive.

Some of the reasons behind why 18 of the top 20 world leaders in semiconductor and microelectronics are using our solutions include:

  • Customer focus: We obsess over customer problems trying to solve their pain points and striving to provide them with the best possible support. We never stop asking our customers and partners about their needs, so as not to lose sight of their evolutions and paradigm shifts.
  • IP-XACT: By using this as an underlying data-model, we offer reassurance to our customers that their design data is not held hostage to a proprietary format. We also provide them with a number of means by which they can easily customize our tools for their design flows.
  • Our integrated design platform offers a non-disruptive framework, providing fluidity, flexibility and seamless execution of the entire flow, providing better control to designers.
  • We have always taken a long-term perspective when engaging with potential customers, and have been transparent and honest in our relations. This has helped ensure customer loyalty. In fact, ST Microelectronics, our first customer in 2006, still continues to be a customer. We are also a public company, which inspires confidence among our customers.

Where does Magillem fit into the SoC design flow?

Our software platform is architected to meet the unique requirements and flows of SoC design teams while providing them seamless design solutions from architectural concept to physical implementation. Our products are used by a number of teams in the SoC design flow. These include the design specification/architects, RTL designers, SoC integrators, verification and validation teams, the physical implementation team and tech pubs team. From a physical implementation standpoint, our tools are used to modify the logical hierarchy of the design and map the logical hierarchy to the physical hierarchy.

Magillem is known as “The IP-XACT company”. With the growing emphasis on IP reuse, what is Magillem’s perspective on design reuse and IP Management?

The IP-XACT consortium was formed with the intent of improving the exchange of IP between companies and increasing reusability. However, the notion of IP and reusability has evolved over the last decade. In the SoC world, the term IP is no longer restricted to merely silicon IP but has been extended to include specifications, scripts, flows and documents as well. For example, a memory map specification is itself an IP within a design platform. It should be reusable across several versions or generations of the design and generate the desired RTL and documentation automatically. IP-XACT, being extremely versatile and equipped with the Tight Generator Interface (TGI), serves as an ideal data-model for IP reuse and provides companies with the ability to manage and customize IPs easily while generating outputs catered to their needs.
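To make the single-source idea concrete, here is a deliberately simplified sketch of how one memory-map specification can drive both RTL and documentation generation. All names and formats here are hypothetical, for illustration only – real flows work from the IP-XACT XML data-model and commercial generators, not from Python dictionaries:

```python
# Illustrative sketch of single-source generation (hypothetical names, not any
# vendor's actual tooling): one memory-map specification is the sole input, and
# every downstream artifact (RTL constants, documentation) is derived from it.

regs = [  # the single source of truth: register name, byte offset, description
    {"name": "CTRL",   "offset": 0x00, "desc": "Control register"},
    {"name": "STATUS", "offset": 0x04, "desc": "Status register"},
]

def to_verilog(regs):
    """Emit a trivial Verilog address-map fragment from the specification."""
    lines = ["// auto-generated from memory map"]
    for r in regs:
        lines.append(f"localparam {r['name']}_ADDR = 32'h{r['offset']:08X};")
    return "\n".join(lines)

def to_doc(regs):
    """Emit a Markdown register table from the same specification."""
    rows = ["| Register | Offset | Description |", "|---|---|---|"]
    rows += [f"| {r['name']} | 0x{r['offset']:02X} | {r['desc']} |" for r in regs]
    return "\n".join(rows)

print(to_verilog(regs))
print(to_doc(regs))
```

Because both outputs are generated from one specification, a change to the memory map can never leave the RTL and the documentation out of sync – which is the reuse and consistency benefit IP-XACT provides at full scale.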

Going forward, we believe that reusability will be the new mantra, not just for the silicon IPs but also for the associated collateral such as scripts, flows, ideas etc. Companies will strive to strike a balance between new and legacy IPs and will look for new ways of managing all their IPs. IP management will take a whole new meaning as design teams within a company will look at ways to seamlessly add their IPs to the IP library as well as extrapolate key metrics from their design database to determine whether the IP meets their current design requirements.

IP-XACT has grown in popularity in the recent years with a number of companies using it for a diverse set of solutions such as IP reuse, hardware-software interface, SoC integration, verification, traceability etc. How do you envision IP-XACT’s role in SoC designs in the years to come?

IP-XACT has been around for the past 16 years and is gaining quite a bit of momentum, with the next major changes to the standard coming next year. In the last decade or so, the semiconductor industry as a whole has been witnessing a change, embracing open standards and software. Advanced Interface Bus (AIB), IP-XACT (IEEE 1685) and Git are some examples. One of the primary reasons behind the departure from proprietary technologies is the cost associated with constantly having to modify the data-model to accommodate changes to the flow, scripts or generators. The flexibility of IP-XACT, along with TGI, to accommodate the customizations required by design companies makes a compelling case for adoption. With a growing number of member companies in the IP-XACT consortium coupled with increased adoption, it is only natural for the IP-XACT standard to continue to evolve. I expect it to be the cornerstone of successful design flows and to help shorten design cycles by enabling designers to be more productive.

Magillem provides a number of solutions to enable design teams to successfully develop their SoCs faster. However, none of its competition provides anywhere close to the entire range of solutions which Magillem provides. Any comments?

I don’t want to comment about our competition as such. At Magillem, we have never believed in selling piecemeal solutions, as they have limited value. There is more power in offering a software platform which provides designers with a complete solution and brings more value to the user. But developing a software platform for the entire SoC flow is never easy. When we started Magillem, we had the vision to integrate specifications, designs, documentation and automation for the entire SoC design cycle. Our mission was to streamline design flows for the benefit of all teams involved and shorten the design development cycle. Committing to a vision such as this enabled us to plan better, employ people with the right skills and work in partnership with the right companies to develop our software solutions.

There are several factors which have helped us reach where we are. First and foremost, the credit for this goes to our team. Throughout our history, I have believed in investing in the people who work for us and motivating them. For me, our employees come first. In the current pandemic, I want them to be safe first and comfortable working from home. Our rather low attrition rate also speaks volumes about our work environment. With motivated employees, it becomes easier to develop better solutions.

Our customers are another factor which has helped us be where we are. We have worked hard with them to ensure their problems are solved. This has been the breeding ground for new ideas as well as the driver to ensure our tools are mature and can successfully fit into their custom design flows. Needless to say, we also support our customers very well and remain committed to helping them be successful.

Last but not least, we have never believed in using proprietary data models, as there is always a big downside whenever enhancements have to be made. We believe in transparency, hence our commitment to IP-XACT, an IEEE standard, as the data-model of choice.

Aside from the big four EDA companies, Magillem is the only other EDA company listed on a stock exchange. You are currently also the only woman CEO of an EDA company. What do you foresee for Magillem’s business over the next 3 to 5 years considering the current business climate?

“Faced with crisis, the man of character falls back on himself. He imposes his own stamp of action, takes responsibility for it, makes it his own.” Charles de Gaulle

I strongly believe in this quote. In 2009, during the financial crisis, when everyone was cautioning us against going public, we still went ahead with it since we believed it was the time to get funded and invest. And we have not regretted our decision. In fact, this move has enabled us to earn credibility with our customers. In the current economy, business will no doubt be tough. We will have to work hard to find new opportunities. Our enthusiasm will help us open new doors, and we have chosen, once again, to drive a strong dynamic, keep hiring new talent and move forward with energy.

I strongly believe in our company and the people who work there. We are also financially well placed unlike our competition. New opportunities in bio-medical, defense, AI, Automotive are beckoning. In the next couple of years, we will focus on upcoming industries and invest in them while retaining what we already have. A few years from now, I assure you that you will see a Magillem which has weathered the storm and grown.

Also Read:

CEO Interview: Ted Tewksbury of Eta Compute

CEO Interview: Ljubisa Bajic of Tenstorrent

CEO Interview: Anupam Bakshi of Agnisys


Is a US Semiconductor Manufacturing Revival on the Way?

by Anish Tolia on 08-13-2020 at 9:00 am


Two bits of recent news have excited people in the semiconductor manufacturing space. First, TSMC (bit.ly/384joVr) announced its intention to invest $12B in a fab in Arizona. Then came the Corona-driven, bipartisan proposal for a $23B federal government investment in semiconductor manufacturing (nyti.ms/2YZzFqnl). The question is: what is the potential impact of this, and can it meaningfully change the semiconductor fab landscape?

A very brief History of Semiconductor Manufacturing

The US was the birthplace of the semiconductor, from the invention of the transistor by Shockley and his Bell Labs colleagues in the late 1940s, giving rise to early leaders like Fairchild Semiconductor, which in turn spawned legendary companies like Intel. In the days when every semiconductor company made its own devices and fabs cost a few million dollars, manufacturing sites mushroomed, giving Silicon Valley its name.

In the 80s, Japan started investing heavily in fabs and the US started to lose its leadership in the industry. Japan applied its quality and manufacturing prowess to dominate segments like memory chips, pushing Intel to move from memory chips to microprocessors in the mid 80s. However, the bigger shift happened in the 90s with the advent of foundries and fabless design companies. Led by companies like TSMC and UMC, foundries made many of the smaller-scale fabs no longer cost effective. At the same time, Korean chaebols like Samsung started investing heavily in manufacturing, and now two Korean companies, Samsung and Hynix, dominate the memory chip market. The US now manufactures less than 12% of the world’s IC chips.

In 2014, China decided that semiconductor manufacturing is a matter of national security and began implementing an ambitious $20B+ “Big Fund” initiative to move China from a minor player with less than 10% market share to a global giant taking on the Koreans, Taiwanese and Americans. In 2019, it announced a “Big Fund II” that was double the size of the previous fund. On top of that, local government entities invested their own funds in local projects and state-owned banks made cheap capital available. China’s IC output is now around the same as the US’s, at around 12% of global production.

Chip Manufacturing in America Today

After a wave of consolidations and outsourcing, most American companies have exited the manufacturing race. There are only three major companies left: Intel, Micron and GlobalFoundries (GF). GF is not even a truly American company, being privately owned by Mubadala, the sovereign wealth fund of the UAE. Each has its own niche, and they do not compete with each other. Intel is an IDM (Integrated Device Manufacturer) and produces microprocessors for PCs and servers. Micron produces memory chips (NAND and DRAM), and GlobalFoundries, as its name suggests, is a contract manufacturer for fabless chip companies. But semiconductor manufacturing is a global business and all three companies operate global fabs. Intel routinely rotates new fabs between the US, Ireland and Israel. Micron operates fabs in Japan and Taiwan, and its latest large investment was in Singapore. GF has plants in Germany and Singapore as well.

Let’s take a look at the key elements involved in setting up a state-of-the-art semiconductor fab.

1.  Sustained Capital Investment: A state-of-the-art fab now costs north of $10B, and the total capex of semiconductor manufacturers in 2019 was $102B. To keep pace with Moore’s Law, companies must bring new technology nodes to market every two years. At this scale, a one-time boost of $23B doesn’t change the game for the US. There is also the question of how subsidies are structured. In the past, for example in the solar industry, the US model has been a complicated structure built on the tax code via credits, tax breaks and the like, which is an inefficient way to deploy capital.

2.  Engineering Talent: US universities produce a large number of engineering graduates, and the US remains a desirable destination for world talent (notwithstanding recent moves by the administration to change that). Staffing new fabs should not be an issue.

3.  Infrastructure: Land, a stable power grid, water supply and transportation infrastructure are all essential. As a developed country, the US is not lacking in this respect.

4.  Supply Chain: The supply chain for semiconductors is as global as the end product, and it is very difficult, if not impossible, for any country to be fully self-sufficient in manufacturing. The US is well placed for some key building blocks like production equipment, with the largest suppliers AMAT, LAM and KLA being US companies. But even there, the Dutch company ASML holds a monopoly on the critical photolithography steps. On the materials side it’s even more difficult: the manufacturing process uses hundreds of chemicals and gases, many of which must be sourced from Asia. True end-to-end self-sufficiency for any country is a pipe dream.

Who will build the new Fabs in the US?

Intel has historically sustained aggressive technology development and fab builds. With additional incentives, it would likely choose US sites over Ireland or Israel for future investments; not surprisingly, Intel has been lobbying strongly for US government support of the industry. However, even Intel has tried the foundry model in the past without making progress, and in recent announcements citing further delays in its 7nm process, it even indicated a willingness to use outside partners for manufacturing (bit.ly/2DHVzrq).

Micron has grown mostly via acquisition of Japanese and Taiwanese companies and by expanding in Singapore. Memory chips are a commodity product and highly cost-sensitive, so it is unlikely Micron would build another large-scale US fab any time soon.

GlobalFoundries has exited the race for the latest process nodes and is focusing on more niche technologies and applications like FD-SOI. It is also in financial trouble, kept afloat only by the Mubadala group. A new leading-edge fab from GF anywhere in the world is very unlikely in the next few years.

Global Players: The other top global players (leaving aside Chinese companies) are TSMC, Samsung and Hynix. Samsung operates two fabs in Austin and it is conceivable that they would expand there if there were sufficient incentive to do so. Hynix has no history of operating in the US (they do operate a large fab in China) and is unlikely to start now.

TSMC has dabbled with the idea of a US fab for a while but has never pulled the trigger. TSMC is also greatly favored by the Taiwan government as a national jewel and is given plenty of reasons to keep expanding in Taiwan, a small island where TSMC can rotate its key people between sites very easily. In spite of the recent announcement of the Arizona investment, the advantages for TSMC of a US fab are hard to see, other than political ones. Much like the ballyhooed Foxconn plant in Wisconsin that never materialized, it could be a hedge against future tariffs, or a nod to Apple to move its supply chain to the US. But on a pure operational and financial basis, there is no apparent advantage to the move.

Can the US learn from the China model?

In contrast to the US approach of tax-code incentives, China typically funnels money via direct equity investments or loans through state banks, which greatly accelerates putting up new fabs. Chinese companies also take a long view of an industry and are willing to operate at a loss for years. For example, they completely dominate solar manufacturing (90% of solar cells are made in China) and have become a major player in display manufacturing, taking share from the Koreans and Japanese.

The US would need to show a similar long-term commitment of Federal and State support, which is very difficult in our political system. Again taking solar as an example, various combinations of Federal tax credits and State incentives were used to jump-start the industry. This worked on the installation front, where cost per watt kept dropping, but the US remains a minor player in manufacturing the solar cells and modules themselves.

And finally, since only the top five IC makers have the wherewithal to build leading-generation fabs, any policy must be aligned with the strategic objectives of these companies. Some of that could be done through punitive measures like tariffs and trade actions, but these are rarely sustainable in a global economy.

In the end it probably makes more sense to focus on a small subset of critical chips with true national security implications and continue to fund them as necessary. A large-scale shift of IC manufacturing back to the US does not seem a likely scenario.

What do you think? Comments, debate and discussion welcome!

Anish Tolia, Ph.D
Global Marketing Executive/Consultant


Cadence Increases Verification Efficiency up to 5X with Xcelium ML

by Mike Gianfagna on 08-13-2020 at 6:00 am


SoC verification has always been an interesting topic for me. Having worked at companies like Zycad that offered hardware accelerators for logic and fault simulation, the concept of reducing the time needed to verify a complex SoC has occupied a lot of my thoughts. The bar we always tried to clear was actually simple to articulate – verification is over when you run out of time. The goal was to make the verification process converge BEFORE you ran out of time. A recent announcement from Cadence got my attention due to the headline that Cadence Increases Verification Efficiency up to 5X with Xcelium ML.

I had a chance to chat with Paul Cunningham, corporate vice president and general manager of the System & Verification Group at Cadence about what this announcement really means to verification engineers. You may recall Paul provided a great overview on machine learning at Cadence  – inside, outside and everywhere else. So, I saw this as an opportunity to get the real scoop about the announcement.

The backstory is quite interesting. At a high level, the way to think of this machine learning (ML) enhancement to Xcelium is that it now delivers the same test coverage results in up to one fifth the time. That time compression has some significant implications. Before we go there, I explored how Cadence increases verification efficiency up to 5X with Xcelium ML.

This all starts with constrained random tests for simulation through a language like SystemVerilog. By ordering the tests correctly, you can get some improvement in simulation throughput. While this approach helps, verification compute still has a tendency to explode. How can you manage this? It turns out the answer is quite complex.
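As a toy illustration of why test ordering matters (this is a generic greedy heuristic with hypothetical seed names and coverage sets, not Cadence's actual algorithm), consider picking at each step the test that hits the most not-yet-covered points:

```python
# Greedy test ordering: repeatedly pick the test that adds the most new
# coverage points. Illustrative sketch only -- not Cadence's algorithm.
def order_tests(tests):
    """tests: dict mapping test name -> set of coverage points it hits."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # Choose the test with the largest number of new coverage points.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not (remaining[best] - covered):
            break  # no remaining test adds new coverage; stop early
        covered |= remaining.pop(best)
        order.append(best)
    return order, covered

# Hypothetical coverage data for three constrained-random seeds
tests = {
    "seed_a": {1, 2, 3},
    "seed_b": {3, 4},
    "seed_c": {1, 2},
}
order, covered = order_tests(tests)
# seed_a is chosen first (3 new points), then seed_b (1 new point);
# seed_c adds nothing new and is dropped from the run.
```

Of course, in a real constrained-random flow the coverage each seed will hit is not known in advance, which is exactly the gap the ML model fills by learning the relationship between stimulus and coverage.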

The core focus of the new Xcelium ML is to examine the circuit, identify the relationship between input stimulus and design or functional coverage, and then generate randomized vectors that hit coverage points much more efficiently. This required a lot of work on the core algorithms of Xcelium. Paul explained that it took a few tries to get it right, but once an approach was identified, the resulting reduction in simulation time was very significant.

Paul explained that the machine learning problem being addressed requires both an analysis of the input data set (the network) and the stimulus for that network. A “standard” ML problem starts with a known input (e.g., a picture of a cat) and the goal is to analyze and recognize the input. The Xcelium problem requires analysis of the input data set as well as generation of targeted vectors. Without analysis and optimization of the input data, the size of the ML problem becomes the same size as the design itself, and ML inference becomes so slow you lose any performance benefits of the approach. This dual analysis task is a much harder problem. Paul described the work as “true innovation”.

So, what do you do when you can achieve the same coverage in one fifth the time? The answer is actually quite straightforward: you spend the 80 percent of the schedule you just recovered finding new bugs in your design. This is great news for the verification engineer.
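The arithmetic behind that figure is worth making explicit: hitting the same coverage with a 5X speedup consumes only one fifth of the original schedule, freeing the rest for bug hunting. In integer percentages, to keep the math exact:

```python
# Schedule freed up by an N-times verification speedup (using the
# "up to 5X" figure from the announcement).
speedup = 5
schedule_used = 100 // speedup            # 20% of the schedule for the same coverage
schedule_recovered = 100 - schedule_used  # 80% freed up to find new bugs
```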

Finding problems earlier is always a win. Anyone who works in EDA knows this intuitively, but finding data to back it up is sometimes difficult. Cadence has found a very useful analysis of this effect from Intrinsix. Here is the graphic. It shines a light on what we all know – finding and fixing bugs earlier in the process is much more cost-effective.

The innovation Cadence has delivered is organic in nature.  A bit of genealogy will help explain this:

  • NCSim is introduced around 2000
  • Incisive adds constrained random, SystemVerilog and UVM
  • Xcelium adds multi-core capability from the Rocketick acquisition, high-performance, low-power SystemVerilog capability, incremental compile and save/restart support
  • Xcelium ML adds machine learning optimization for efficient randomized vector generation

So, Xcelium ML is built on a series of organically developed innovations. The product will be generally available in Q4, 2020. Adoption of Xcelium is quite strong. According to the Cadence earnings call:

“Our Xcelium simulator has been steadily proliferating, with multiple migrations from competitive simulators underway.”

If you want to experience a 5X “time warp” in your next verification cycle, you should definitely check out Xcelium ML. You can learn more about the Xcelium simulator here.

Also Read

Structural CDC Analysis Signoff? Think Again.

Cadence on Automotive Safety: Without Security, There is no Safety

DAC Panel: Cadence Weighs in on AI for EDA, What Applications, Where’s the Data?


RISC-V SDKs, from IP Vendor or a Third Party?

by Bernard Murphy on 08-12-2020 at 6:00 am


Like many of us, I’m a fan of open-source solutions. They provide common platforms for shared product evolution, avoiding a lot of unnecessary wheel reinvention. Linux, TensorFlow, Apache projects, etc. More recently the theme has moved into hardware with OpenCores and now the RISC-V ISA. All good stuff. Then I remember Richard Stallman’s comment about “free as in free speech, not free beer” and I realize that markets around these areas don’t disappear; they just morph into providing value on top of those platforms, especially in delivering commercial-grade equivalents. Such as a third-party RISC-V SDK.

The siren song of “free”

We can still build software entirely from scratch in our home-office ventures, but that’s generally a lot harder than it sounds, not to mention a major distraction from whatever we thought the goal of our brave new venture was going to be. It’s great to have no license-fee overhead; it’s not so great when your customers are finding basic bugs or security holes in your product. Then it’s nice to have the assurance of commercial-grade toolchains and IPs underpinning your development, letting those suppliers worry about staying ahead of bugs, enhancements, CWEs and the like.

Build your own RISC-V SDKs?

Mentor recently released a white paper on this topic, highlighting the advantages of their RISC-V SDK over build-your-own (BYO) or RISC-V supplier implementations. BYO is an easy option to dismiss. For either GCC or LLVM compilers, you can’t just compile and link the source; you have to run the project test suites through a test framework. Fixes to both code and tests come in constantly from the open-source community. Some tests are expected to fail, and you have to deal with those; you also have to figure out what to do with unexpected failures, which is rarely simple. You have to incorporate security patches, usually into libraries, based on the latest CWE discoveries. Fixes arrive according to agreements within the community, on whatever schedule it can deliver. If a customer is screaming at you, you’ll have to craft your own fix and re-regress, then maybe swap in the official fix when it appears. It’s a constantly evolving problem, acceptable maybe if you’re going to make money selling SDKs, but not otherwise.

What about the IP vendor RISC-V SDKs?

Why wouldn’t SDKs from RISC-V hardware suppliers be good enough? The white paper doesn’t elaborate on this point, but I’ll take a stab. I’ve worked in software companies for most of my career, and I know that software productization isn’t free, or even close to free. It grows to become a significant chunk of total company spend: R&D development and debug, DevOps, online support, applications support. In fairness to the RISC-V product suppliers, they put a lot of work into building their toolchain solutions, and I’m sure they would object strenuously to any suggestion that their SDKs are anything less than fully robust.

Maybe there’s room for both

And yet I think back to my software company experience and wonder how they are going to scale that part of their business every bit as effectively as the hardware parts of the business.  Especially if they plan to offer the software for free, or close to free. As far forward as I can see, mixing software and hardware in one company is going to remain a challenge. It can be done but it’s not easy. It’s good to have alternatives in the ecosystem. Such as the Mentor RISC-V SDK options.

You can read the white paper HERE.


CEO Interview: Ted Tewksbury of Eta Compute

by Daniel Nenni on 08-11-2020 at 10:00 am

Ted Tewksbury, President & CEO, Eta Compute

Tell me about Eta Compute’s vision? 
We envision a world where intelligent devices at the network edge make everyone’s lives safer, healthier, more comfortable, and convenient without sacrificing privacy and security.

How do you hope to achieve this? 
We achieve this by providing the lowest power and most energy efficient machine learning platform to enable intelligent sensing anywhere.

Why is low power so important in this goal? 
Demand for always-on sensors is growing exponentially, driven by the Internet of Things (IoT), wearables, hearables, smart cities, buildings, industry 4.0, and many other applications. Artificial Intelligence (AI) is needed to transform raw sensor data into actionable information. Today, this is done primarily in the cloud. The challenge for many sensor nodes is that they’re powered by small batteries or energy harvesting and have to operate for years without maintenance. Wireless transmission of data is enormously power hungry, so cloud AI isn’t an option.

The only viable solution is to move inferencing to the sensor endpoint itself. Not only does this reduce power by several orders of magnitude but it also provides other benefits such as reduced latency for real time applications, continuity of operation in case of a connectivity disruption and improved security by keeping sensitive data on premises. There’s just one problem – this architecture needs an inferencing engine running at well under a milliwatt, which hasn’t existed until now.

What is the TENSAI® Platform?
TENSAI® is a complete endpoint-to-cloud AI platform that achieves the lowest power and energy per inference in the industry. It removes energy capacity as a constraint for unwired intelligent sensor nodes.

TENSAI consists of three basic components:
First is the ECM3532 Neural Sensor Processor. The ECM3532 uses a unique, patented dual-core architecture to reduce power consumption to the 100-microwatt range. This enables 100 times lower energy per inference or, equivalently, 100 times longer battery life than the competition. The ECM3532 interfaces with any combination of sensors and runs a variety of neural nets, enabling it to address a wide variety of applications, including voice, sound, image, motion, biometrics, and others.

Second, our TENSAI compiler translates and compresses neural nets from TensorFlow to firmware optimized for the ECM3532. Details of the hardware are completely hidden from the developer and no embedded programming is required. TENSAI is 10x more energy efficient than competing compilers.

The third element of TENSAI is our recently announced AI sensor board, which integrates the ECM3532 with a variety of sensors to facilitate the development of new applications.

The platform is fully integrated with Azure IoT cloud for provisioning, training, data analytics, and continuous model improvements. TENSAI makes it easy for anyone to develop new edge applications with the lowest energy per inference and fastest time to market.

Where will your technology “touch” the consumer and how will it improve their lives?
You’ll find TENSAI wherever you have always-on sensors that need to make autonomous decisions at very low power without wired power and without continuously communicating with the cloud. Use cases are as vast as your imagination. Think about retrofitting imaging and security in remote locations, outdoors, or in existing buildings without having to rewire electricity. Imagine tiny battery-operated imaging devices that can monitor shelves in a store or warehouse, count people in a conference room, or measure social distancing to ensure that CDC guidelines are being met. Consider touchless user interfaces to control battery-operated devices with gestures, voice, or simply by your presence. TENSAI-enabled devices can monitor the location, activity, and health of people, assets, livestock, or crops anywhere, without wireline power. These are just a few use cases that make life safer, healthier, and more convenient and which wouldn’t be possible without ultra-energy efficient endpoint AI enabled by TENSAI.

How is the TENSAI Platform better than alternatives?
TENSAI is a complete end-to-end development platform providing the optimal combination of energy efficiency, accuracy, and ease of use, with the flexibility to support a wide range of applications including voice, sound, vision, motion, biometrics, and environmental sensing. TENSAI delivers the highest energy efficiency and the lowest energy per inference of any alternative. Only TENSAI can enable 2-10 years of battery life with zero maintenance, as required by many IoT applications. While there is a great deal of hype and vaporware in the edge AI segment, TENSAI is real. It’s available now and ready to ship in volume.

How has the market reception to your solutions been – when will you announce customers or products using your technology?
We have received extremely enthusiastic responses from the market and are gaining increasing traction with marquee partners and customers across a wide range of use cases. We are currently in the design-in phase, so we can’t disclose customer names at this time, but we expect to make some announcements by the end of this year.

What are your challenges to adoption?
The two primary technical impediments to the adoption of TinyML were energy efficiency and ease of use, both of which we have now solved with TENSAI. As with any new technology, a robust ecosystem is vital to catalyzing adoption. Our growth is limited only by the number of developers and how quickly they can deploy new applications on our processor. That’s why we developed the TENSAI compiler to facilitate and accelerate customers’ development cycles. We are partnering with other industry leaders to build a robust TinyML ecosystem and enable new use cases. For example, TENSAI is now integrated into Edge Impulse’s on-line TinyML platform, making it possible for their growing community of open-source developers to quickly create new applications for the ECM3532 without writing a single line of code.

What’s next for Eta Compute?
Our first product, the ECM3532, is targeted at the IoT and other energy constrained applications. The proprietary power management technology that enables our breakthrough energy efficiency is scalable to higher performance levels and smaller process geometries. Our next generation processor will address higher performance applications while achieving the same 100x energy efficiency advantage over competitors. In addition, we’re responding to strong customer demand for our AI sensor board by developing higher levels of system integration for specific vertical markets.

If you had 60 seconds to make a pitch to a VC, what would you say?
Imagine a peel-and-stick sensor module the size of a deck of cards, containing a tiny camera, microphones, an accelerometer, a gyroscope, and pluggable sensors for gas, pressure, and other functions. The module operates on solar energy, or it can run for more than two years on a couple of AA batteries. The device has a socket for a radio of your choice: LoRaWAN, NB-IoT, Bluetooth, Wi-Fi, Zigbee, Sigfox, Wi-SUN, or others. It automatically connects to the cloud, provisions itself and appears on your online dashboard right out of the box. You can view all your modules on your laptop or smartphone, change settings, analyze data, and update your models to improve accuracy. Placed on a wall or ceiling, the device can count people, detect intruders, or monitor social distancing. Mounted on a shelf in a store or warehouse, it can notify management when a product is out of place or the shelf needs restocking. Stick it on a machine, engine, or appliance and it can tell you when maintenance is required or a part has to be replaced. All of these applications can be accomplished with the same hardware simply by downloading the appropriate neural net from an online library, or you can create a custom application online at the push of a button. This isn’t the future. This is now. And this is just a fraction of what can be done with TENSAI.

Also Read:

CEO Interview: Ljubisa Bajic of Tenstorrent

CEO Interview: Anupam Bakshi of Agnisys

Interview with Altair CTO Sam Mahalingam


Should India Invest in Semiconductor Manufacturing?

by Terry Daly on 08-11-2020 at 6:00 am


Cyber security, economic growth, government leadership and industry support must be weighed

The debate
A healthy public debate is underway as to whether India should invest in semiconductor manufacturing. PVG Menon, former chair of the India Electronics and Semiconductor Association, supports the objective as part of a full “chip” ecosystem yet is deeply skeptical of its achievability. Parag Naik, Co-Founder and CEO of Saankhya Labs, argues that India should prioritize investment in “fabless” design firms that generate chip demand, control intellectual property rights (IPR) and direct product sourcing. In June, the Ministry of Electronics & Information Technology (MeitY) published a program setting forth renewed financial incentives for electronics manufacturing, including semiconductors. Minister of Commerce and Industry Piyush Goyal recently asked Indian industry to take the lead in setting up a semiconductor factory (fab) in India.

The background
Despite having extensive top talent, India’s chip sector remains essentially a “design services job shop.” Global multinationals leverage Indian expertise at low cost while capturing the IPR and attendant profits.  There are no commercial fabs in the country.  India has posted incentive packages for chip manufacturing for a decade, but alone they have been insufficient to motivate the industry to invest. India is not playing to win and is on the sidelines at a critical juncture. The geopolitical battle between China and the U.S. offers an opportunity for India to become a strategic hedge for the industry.

The Government of India (GoI) has historically fixated on “building a fab”, targeting the foundry model. Building a fab is arguably the easy part; loading it consistently with customer demand, developing well-enabled technology platforms, delivering operational efficiency and excellent customer support, and generating a return on investment are the key challenges. A very capable global foundry industry, led by TSMC, has evolved over the past 30+ years. These firms compete fiercely and provide world-class technology and customer service. Customer loyalty is high, and switching product sourcing to a new India fab would be high risk. Most customers would be unwilling to source in India unless the cost of NOT doing so is higher.

Another key challenge is attracting both private investment and the active participation of the global semiconductor industry, which requires demonstrating a viable and sustainable strategy. Competitors have largely depreciated fabs and track records of relentless cost management; their low variable costs and near-zero depreciation enable aggressive pricing. A multi-billion-dollar investment in India will be at a long-term competitive cost and profit disadvantage unless supported by aggressive subsidies. GoI plays a key role here: a national strategy is sorely needed.

Why would India want to invest in a chip manufacturing sector?
India is at a critical crossroads. Its virtually 100% reliance on the global industry for electronics products carries adverse strategic implications. The country’s national security and core national infrastructure are increasingly at risk of cyber threats from geopolitical foes, notably China, which controls 50% of global electronics manufacturing. Having not previously invested to control critical segments of the electronics value chain, India is behind.

Dr. G. Satheesh Reddy, head of India’s Defence Research and Development Organisation, recognizes the “dependency threat”, albeit at a higher platform level. He stated in November 2019 that his agency “… has set a target to achieve self-reliance in missiles, radars, sonars, torpedoes, armaments and EW (electronic warfare) systems. We intend to have no import for these systems in five years.” A laudable and challenging goal. But did the scope include the electronics sub-systems of these platforms? The chips?

Despite India’s recent economic progress, more is needed to accelerate the pace of growth and meet its vast social needs. India is under-achieving its potential for higher rates of GDP growth by deferring to global players for innovation and supply chain control in electronics, including chips. India’s burgeoning indigenous electronics and chip demand, powered by 1.35 billion people and a strongly emerging middle class, if properly positioned, can incentivize global foundry players and fabless companies to invest and position themselves to capture this demand. But GoI policy must be a forcing function to translate indigenous “consumption demand” into one where products are designed and manufactured in India.

How might India proceed?
Clarity of objective is central. Global market leadership or matching China’s massive subsidies are not necessary. One approach would be to focus the strategy on self-sufficiency and security of supply for the key electronics platforms and chips powering national security (defense, intelligence, cyber, space) and critical infrastructure (energy grids, communications, media, finance and banking networks). Commercial innovation and GDP growth will follow.

Five strategic pillars could support implementation:

  1. Create a “Trusted Electronics Value Chain” policy to foster innovative global and India-based security protocols and require that all key electronics assemblies and chips in India’s national security and critical infrastructure applications comply with these protocols. The objective is to drive design and ultimately manufacturing of electronics and chip content into India, leveraging preferential market access and the national security exemption provisions of international trade agreements.
  2. Transform India’s chip design services sector into a global Fabless product industry through public-private investment in companies targeting security applications and the emerging technologies of the cognitive era (e.g., Artificial Intelligence, Blockchain, Machine Learning, Virtual Reality, IoT, Big Data and Quantum Computing). Investments in companies that can commercialize leading India-based research in semiconductor product development, such as Shakti from IIT-Madras, should also be prioritized.
  3. Create a world-class technology park that replicates Taiwan’s model. This would be a joint investment by GoI and a regional entity providing capital and operating subsidies for land, infrastructure (roads, water, power, communications), and specialty gas and chemical storage and distribution.
  4. Attract and incentivize a leading global memory manufacturer to build their next fab in the park, acting as the “anchor” tenant; the industry ecosystem of suppliers would quickly follow. Government relationships with South Korea (SK Hynix, Samsung), Japan (Kioxia) and United States (WD, Micron) should be fully leveraged to integrate memory manufacturing into national strategic alliances.
  5. Structure a public-private investment model for fabs. An Indian-led public-private investment model, built on an equity-based technology and operations partnership with TSMC, would quickly legitimize India. TSMC would further monetize its technology and gain geographic diversification to mitigate increasing tensions between Taiwan and the mainland. Recent discussions with US and Japanese entities signal TSMC’s openness to operations outside Taiwan, rare for the industry leader. Global fabless firms could participate through an equity-based take-or-pay capacity obligation. Doing so in a TSMC-partner fab could de-risk customer sourcing concerns, align fabless product lines with required global or Indian security protocols, and confer preferential market access for India’s chip demand.

A competitive-scale 300mm “Trusted” fab producing analog and logic devices in the 65-to-22nm nodes would prove out the model. A 200mm “Trusted” fab featuring the 180/130nm nodes would complement it and incorporate key III-V technologies such as gallium arsenide and gallium nitride (GaN), providing a commercialization path for the world-class GaN research being done at CeNSE at IISc-Bangalore. Future technologies such as silicon carbide could be integrated when viable. For the foreseeable future, leading-edge logic processes (e.g., 16/14nm and below) should be sourced from foundries in allied countries.

Critical success factor: Leadership
There is a high degree of skepticism on the part of the global semiconductor industry that India has the seriousness of purpose, political will or staying power for chip manufacturing. India must convince industry on these points to attract investment. Proof points would include improved infrastructure, a world-class technology park, resolution of “ease of doing business” commercial issues, and much more compelling subsidies. Indian-born semiconductor leadership talent is abundant, though much of it is offshore. A viable and compelling strategy will motivate these leaders to marshal their networks in support.

PM Modi has recognized the critical role of the digital economy and that GDP growth is driven by technology, R&D, and innovation. His Digital India, Smart Cities, Make in India, and Startup India initiatives reflect a vision that the digital economy can be transformative in improving productivity and quality of life in India. The strategy of attracting electronics assemblers Foxconn, Wistron, Pegatron and Samsung into India to demonstrate an “at-scale” electronics manufacturing sector is a good start.

If India were ever to invest in chip manufacturing, now is the time. A skeptical global semiconductor industry would welcome the supply chain diversification to mitigate increasing geopolitical risks in China, Taiwan, and Southeast Asia. The pace of innovation in emerging technologies is accelerating. A talented Indian semiconductor industry is hoping for leadership from Delhi. India’s national security, the future cyber-security of its critical infrastructure, and the pace of economic growth and prosperity hang in the balance.

Terry Daly is a semiconductor industry veteran and former senior vice president with GLOBALFOUNDRIES. He is currently a senior fellow at The Fletcher School of Law & Diplomacy. This article is an excerpt from his keynote address delivered to the Confederation of Indian Industry in November 2019, New Delhi.


New Processor Helps Move Inference to the Edge

New Processor Helps Move Inference to the Edge
by Tom Simon on 08-10-2020 at 10:00 am

MIPI IP from Mixel

Many of the most compelling applications for Artificial Intelligence (AI) and Machine Learning (ML) are found on mobile devices, and the size of that market makes it an attractive segment. We can therefore expect many low-power consumer devices at the edge that need to perform inference tasks. The initial approach was to run these operations in the cloud; however, there are many compelling reasons to move inference to the edge, and one big obstacle: the general-purpose chips traditionally used in mobile and edge devices cannot perform inference fast enough at low enough power consumption. This unmet need has driven a dramatic increase in projects developing low-power, edge-based AI processing units.

Among these is an impressive offering from Perceive, a start-up recently out of stealth under the auspices of Xperi. Its Ergo edge inference processor is targeted at consumer devices such as security cameras, smart appliances, and mobile phones. According to CEO Steve Teig, “Perceive has developed novel, mathematically rigorous methods for inference that redefine what is possible in an edge device. Our Ergo chip delivers data center-class accuracy and performance in consumer devices, protecting privacy and security while running at ultra-low power.”

Ergo can run sophisticated networks such as YOLOv3, M2Det, and others. Perceive states that Ergo runs YOLOv3 at up to 246 frames per second (batch size = 1), and at 30 frames per second it consumes about 20 mW. No external RAM is needed, and the chip fits into a small 7x7mm package. Its efficiency is an impressive 55 TOPS/Watt. The chip is fabricated on the GLOBALFOUNDRIES 22FDX process and has just been announced with first-time silicon success.
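As a rough sanity check, the quoted figures are internally plausible. The sketch below assumes the 20 mW power draw and the 55 TOPS/W efficiency apply to the same 30 fps YOLOv3 workload, which vendor marketing numbers may not guarantee; it simply converts the published numbers into an implied operation count per frame.

```python
# Back-of-envelope check of Perceive's quoted Ergo figures.
# Assumption (not confirmed by Perceive): the 20 mW and 55 TOPS/W
# numbers describe the same 30 fps YOLOv3 workload.

power_w = 0.020        # ~20 mW at 30 frames per second
fps = 30
tops_per_watt = 55.0   # quoted efficiency

energy_per_frame_j = power_w / fps      # joules consumed per inference
ops_per_joule = tops_per_watt * 1e12    # TOPS/W is equivalent to ops per joule
ops_per_frame = ops_per_joule * energy_per_frame_j

print(f"Energy per frame: {energy_per_frame_j * 1e3:.2f} mJ")
print(f"Implied work per frame: {ops_per_frame / 1e9:.0f} GOPs")
```

The implied tens of GOPs per frame is in the right ballpark for a YOLOv3-class network, so the quoted power and efficiency figures hang together.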

One piece of critical IP for this chip is the MIPI interfaces, which come from Mixel. Because image processing is part of many AI/ML systems, MIPI is seeing expanding interest. MIPI offers the power-efficient, high-speed transfer of video data that is ideally suited to AI/ML applications at the edge. Mixel provided Perceive with three different MIPI IP solutions: a four-lane MIPI D-PHYSM CSI-2SM TX IP and both two-lane and four-lane MIPI D-PHY CSI-2 RX IP on the GLOBALFOUNDRIES 22FDX platform. There are two parts to the Mixel solution: the MIPI D-PHY, which Mixel developed itself, and the CSI-2 Peripheral Controller core, developed by the Rambus team acquired from Northwest Logic.

What is interesting is that Perceive chose Mixel for its D-PHY even though Mixel did not yet have silicon on the 22FDX process. Jim Hall, Vice President of Hardware at Perceive, said that Mixel's existing silicon record at GF and other foundries gave Perceive enough confidence to commit to Mixel for this design. Mixel worked closely with Perceive to ensure that compliance testing and characterization after integration went smoothly.

Mixel IP has been silicon-proven at nine different nodes and eight different foundries, with more processes under active development, giving Mixel wide coverage. The announcement of the Perceive Ergo chip using Mixel D-PHY can be found on the Mixel website.

Also Read:

Mixel Makes Major Move on MIPI D-PHY v2.5

MIPI gaining traction in vehicle ADAS and ADS

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices