
Design Considerations for 3DICs

by Tom Dillinger on 12-14-2020 at 6:00 am


The introduction of heterogeneous 3DIC packaging technology offers the opportunity for significant increases in circuit density and performance, with corresponding reductions in package footprint.  Yet, the implementation of a complex 3DIC product requires a considerable investment in methodology development for all facets of the design:

  • system architecture partitioning (among die)
  • I/O assignments for all die, both for signals and the power distribution network (PDN)
  • die floorplanning, driven by the I/O assignments
  • probe card design (with potential reuse between individual die and 3DIC assembly)
  • critical timing path analysis, assessing the tradeoffs between timing paths on-die versus the implementation of vertical paths between stacked die
  • IR drop analysis, a key facet of 3DIC planning due to the power delivery to stacked die using through-silicon or through-dielectric vias
  • a DFT architecture, suitable for 3DIC testing using individual known good die (KGD)
  • reliability analysis of the composite multi-die thermal package model
  • LVS physical verification of the multi-die connectivity model

Whereas 2.5D IC packaging technology has pursued “chiplet-based” die functionality (and potential electrical interface connectivity standards), the complexity of 3DIC implementations requires early and extensive investment in the design and analysis flows listed above – a higher risk than 2.5D IC implementations, for sure, but with a potentially greater reward.

At the recent IEDM 2020 conference, TSMC presented an enlightening paper describing their recent efforts to tackle these 3DIC implementation tradeoffs, using a very interesting testchip implementation.  This article summarizes the highlights of their presentation. [1]

SoIC Packaging Technology

Prior to IEDM, TSMC presented their 3DIC package offering – known as “System on Integrated Chip”, or SoIC – in detail at their Technology Symposium (link).

A (low-temperature) die-to-die bonding technology provides the electrical connectivity and physical attach between die.  The figure below depicts available die attach options – i.e., face-to-face, face-to-back, and a complex combination including side-to-side assembly potentially integrating other die stacks.

For the face-to-face orientation, the backside of the top die receives the signal and PDN redistribution layers.  Alternatively, a third die on the top of the SoIC assembly may be used to implement the signal and PDN redistribution layers to package bumps – a design testcase from TSMC using the triple-stack will be described shortly.

A through-silicon via (TSV) in die #2 provides electrical connectivity for signals and power to die #1.  A through-dielectric via (TDV) is used for connectivity between the package and die #1 in the volumetric region outside of the smaller die #2.

Planning of the power delivery to the SoIC die requires consideration of several factors:

  • estimated power of each die (especially where die #1 is a high-performance, high-power processing unit)
  • TSV/TDV current density limits
  • distinct power domains associated with each die

The figure below highlights the design option of “number of TSVs per power/ground bump”.  To reduce IR drop and observe current density limits through a TSV, an array of TSVs may be appropriate – as an example, up to 8 TSVs are shown in the figure.  (Examples from both FF and SS corners are shown.)

The tradeoff of using multiple, arrayed TSVs is the impact on interconnect density.
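As a rough illustration of that sizing exercise, the minimum TSV count per power/ground bump can be estimated from the bump current and the per-TSV current density limit. The bump current, TSV diameter, and current density limit below are hypothetical placeholders, not TSMC's published values:

```python
import math

def tsvs_per_bump(bump_current_a, tsv_diameter_um, j_max_a_per_cm2):
    """Minimum number of TSVs so each stays under its current density limit."""
    radius_cm = (tsv_diameter_um / 2) * 1e-4       # um -> cm
    area_cm2 = math.pi * radius_cm ** 2            # TSV cross-sectional area
    i_max_per_tsv = j_max_a_per_cm2 * area_cm2     # current ceiling per TSV
    return math.ceil(bump_current_a / i_max_per_tsv)

# Hypothetical numbers: 0.1 A per power bump, 5 um TSVs, 1e5 A/cm^2 limit
print(tsvs_per_bump(0.1, 5.0, 1e5))  # -> 6
```

Doubling the bump current (or halving the density limit) roughly doubles the array size, which is exactly the interconnect-density penalty the tradeoff describes.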

As an illustration, TSMC pursued a unique SoIC implementation – a quad-core ARM A72 processor (die #1) where the L2$ cache arrays commonly integrated with each core have been re-allocated to die #2.  The CPU die in process node N5 maintains an L3$ array, while the SRAM die in process node N7 contains the full set of L2$ arrays.  A third die on top of die #2 provides the redistribution layers.  A total of 2700 connections are present between CPU die #1 and the L2$ arrays in die #2.

This is an example of how SoIC technology could have a major impact on system architectures, where a (large) cache memory is connected vertically to a core, rather than integrated laterally on a monolithic die.

PDN Planning

A key effort in the development of an SoIC is the concurrent engineering related to the assignment of bump, pad, and TSV/TDV locations throughout, for both signals and the PDN.

The figures above highlight the series of planning steps to develop the TSV configuration for the PDN – a face-to-face die attach configuration is used as an example.  The original “dummy” bond pads between die (for mechanical stability) are replaced with the signal and PDN TDV and TSV arrays.  (TSMC also pursued the goal of re-using the probe card, between die #1 testing and the final SoIC testing – that goal influenced the assignment of pad and TSV locations.)

The TSV implementations for the CPU die and SRAM die also need to be carefully chosen so as to meet IR goals, without adversely impacting overall die interconnect density.

LVS

Briefly, TSMC also highlighted the (multi-phase) LVS connectivity verification methodology, and unique DFT architecture selected for this SoIC test vehicle, as depicted below.

DFT

Another major consideration is the DFT architecture for the SoIC, and how connectivity testing will be accomplished using cross-die scan, as illustrated below.

 

TSMC demonstrated that the resulting (N5 + N7) SoIC design achieved a 15% performance gain (with suitable L2$ and L3$ hit rate and latency assumptions), leveraging a significant reduction in point-to-point distance afforded by the vertical connectivity between die.  The package areal footprint for the SoIC is reduced by ~50% from a monolithic 2D implementation.
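The latency-driven portion of that gain can be sketched with a simple average memory access time (AMAT) model; the hit rates and cycle counts below are illustrative assumptions only, not figures from the TSMC paper:

```python
def amat(l1_hit, l1_lat, l2_hit, l2_lat, l3_lat):
    """Average memory access time (cycles) for a three-level cache hierarchy."""
    return l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * l3_lat)

# Assumed: 90% L1 hit rate, 80% L2 hit rate; vertical stacking trims the
# core-to-L2$ round trip from 14 to 10 cycles (hypothetical values).
baseline = amat(l1_hit=0.90, l1_lat=4, l2_hit=0.80, l2_lat=14, l3_lat=40)
stacked  = amat(l1_hit=0.90, l1_lat=4, l2_hit=0.80, l2_lat=10, l3_lat=40)
print(f"baseline: {baseline:.1f} cycles, stacked: {stacked:.1f} cycles")
```

Under these assumptions, AMAT drops from 6.2 to 5.8 cycles, showing how even a few cycles saved on the L2$ round trip compound through the hit-rate weighting.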

3D SoIC packaging technology will offer system architects unique opportunities to pursue design partitioning across vertical die configurations.  The density and electrical characteristics of the vertical bond connections may offer improved performance over lateral (monolithic or 2.5D chiplet-based) interconnects.  (The additional power dissipation of “lite I/O” driver and receiver cells between die versus on-chip signal buffering is typically small.)

The tradeoff is the investment required to develop the SoIC die floorplans for TSV and TDV vias to provide the requisite signal count and low IR drop PDN.  Although 2.5D chiplet-based package offerings have been aggressively adopted, the performance and footprint advantages of a 3DIC are rather compelling.  The TSMC test vehicle demonstrated at IEDM will no doubt generate considerable interest.

-chipguy

References

[1]  Cheng, Y.-K., et al., “Next-Generation Design and Technology Co-optimization (DTCO) of System on Integrated Chip (SoIC) for Mobile and HPC Applications”, IEDM 2020.

 


5 Things You Need to Plan for System Custom Silicon

by Raul Perez on 12-13-2020 at 10:00 am


I used to be part of the custom silicon management team at Apple.  I’ve seen how great a challenge it is to pull off a custom silicon strategy within a one-year product cycle. Apple is the perfect example of this custom silicon model, since they develop the best mobile processors in the world for their products, along with other supporting system custom silicon.

Recently, Apple has even dropped Intel in favor of their own M1 processor for the Mac. Tesla has made their own AI processor and dropped Nvidia. Amazon AWS is about to release their own AI chip, Trainium. Google is rumored to be developing custom silicon for their next phone release. Many others, such as Facebook, are also known or rumored to be developing custom silicon as part of their products or services. These are the who’s who of the world’s best companies, developing custom silicon to lead their categories and saying NO to off-the-shelf silicon.

There are some basic steps that should always be taken to start on the path towards a successful custom silicon strategy. Here they are:

1. Decide where to integrate each type of circuit.

By ‘where’ I mean multiple things. First, I mean which semiconductor process node. One common approach is to split the integration into two chips: analog and power circuits in a 5V process, and digital logic in a low-voltage process. In some cases, a single process (such as some 65 to 55 nm BCD lite nodes) can offer a good performance and value compromise for integrating everything into one chip.

Second, there is the system physical location that needs to be considered. The charger chip will want to be close to the battery and the power input. The processor will want to be close to its peripherals. Trade-offs will need to be worked out to see if integration is acceptable or not.

Third, routability needs to be considered, to verify that routing congestion is not an issue once all the necessary passives required for the chip(s) are included.

Fourth, there are some types of components such as sensors that are made in very specialized technologies such as MEMS, and these are not suitable for integration into a custom chip in standard silicon processes. You may be able to get benefits with the chip co-packaging approach here.

2. Decide what makes sense to integrate and what doesn’t.

Semiconductor processes don’t provide good enough cost and density to justify swapping an off-the-shelf power cap or power inductor for an integrated version. You can easily absorb ESD protection and other diodes, signal FETs, and power FETs into a custom silicon chip. But for the latter, keep in mind that some power FET technologies are superior for high-power and high-voltage applications and are best kept as off-the-shelf components. Every design is different and requires some engineering analysis to decide what makes sense.

Most off-the-shelf components can be integrated into one or a few chips. This typically provides a BOM cost reduction and a board size reduction, each in the 50% range, along with better reliability, better anti-counterfeit security, a better fit to your PRD, and more.

3. Determine what are suitable existing components that could be used as the base IP to get to your desired custom chip.

Custom silicon is usually developed in parallel with the system development, as time to market is usually key for high-volume consumer electronics. Therefore, it is desirable to find chips that are available off-the-shelf and base your custom chip project on them as starting IP. Once you have a list of off-the-shelf components that look attractive, you can contact the suppliers that make them to start a discussion about custom silicon.

4. Determine who are suitable suppliers for your project and how you will manage the project.

First, decide how comfortable you feel about the suppliers you have listed in step 3. Do they have redundant manufacturing sites? Do they have a good track record delivering shipments on time? What is their overall financial health?

Second, you need to know how to manage the silicon suppliers from concept to mass production. It’s too risky to simply sign off on a chip spec and then wait 4, 5, 6 or more months to get your chips back. You need to mitigate that risk with a very thorough process that ensures continuous communication and alignment among all parties involved, with frequent checkpoints in which chip experts who work for your company review your supplier’s work to ensure it is of high quality. Splitting your manpower by having a ‘backup’ system designed with off-the-shelf components is not an acceptable mitigation.

5. Determine the ROI for your particular situation.

Let’s explore an example: Acme electronics is shipping on average 6 million units per year. The product life is about 4 years. Their electronics BOM cost is $2. They’ve determined that with one custom silicon chip they can do everything they need for $1. They engage with a supplier that quotes them an NRE of $3 Million USD in three payments: $1 Million USD at kick off, $1.5 Million USD at tape out and $0.5 Million USD at mass production ramp. So Acme electronics needs to pay up front $3 Million USD. But they will save $1 in every unit they ship. After shipping 3 Million systems they will recover their NRE investment. After that they will earn $1 of extra profit for every system they ship. Since they will sell 24 million systems during the product’s lifetime, after deducting the $3 Million USD NRE, Acme electronics makes $21 Million USD of extra profit. So the ROI for them was equal to 21 Million USD/3 Million USD = 700%.
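The Acme arithmetic can be captured in a few lines (all figures are taken directly from the example above):

```python
def custom_silicon_roi(units_per_year, years, savings_per_unit, nre):
    """Break-even units, net extra profit, and ROI for a custom silicon project."""
    lifetime_units = units_per_year * years
    net_profit = lifetime_units * savings_per_unit - nre
    breakeven_units = nre / savings_per_unit
    roi_pct = net_profit / nre * 100
    return breakeven_units, net_profit, roi_pct

# Acme: 6M units/year for 4 years, $1 saved per unit, $3M NRE
breakeven, profit, roi = custom_silicon_roi(6_000_000, 4, 1.00, 3_000_000)
print(f"break-even at {breakeven:,.0f} units; ${profit/1e6:.0f}M extra profit; ROI {roi:.0f}%")
# -> break-even at 3,000,000 units; $21M extra profit; ROI 700%
```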

It’s also important to consider in this analysis the losses you may be incurring due to counterfeits, yield losses, etc… A custom silicon strategy can help you virtually eliminate counterfeit risks and losses. Therefore, that should be part of your cost benefit analysis.

 

About CustomSilicon.com by Digital Papaya Inc.

 

CustomSilicon.com is the leading consulting firm in the custom silicon strategy and project management space for AR/VR, automotive, mobile, server, crypto, sensors, security, medical, space and more.

Raul has 20 years of combined experience in the system electronics and silicon industries. He is currently responsible for a major system company’s custom silicon and sensor projects. Raul was the directly responsible silicon manager for 18 chips ramped to mass production at Apple for iPhone and iPad, and 23 total chips ramped to mass production counting projects where he was an expert reviewer. Raul was directly responsible for the development of mobile processor system PMICs for the iPad 2, New iPad, iPad mini, iPad 4, and iPhone 5s. Other silicon included backlight/display power for the iPhone 5 and iPhone 5s, Lightning connector silicon, and video buffers. He managed supplier teams across the globe.

Our network of experts provide our clients with an A+ silicon management team from day one.


The Semiconductor Industry Has High Hopes That Biden Will Change Tracks

by Terry Daly on 12-13-2020 at 8:00 am


What is the “right track” for US-China trade relations?

The semiconductor industry has been squarely in the crosshairs of US-China trade tensions for four years. As the US faces a presidential leadership transition, will a Biden administration change the dynamic? The chip industry is counting on it, and China hopes so too.

In a recent address to the US-China Business Council, China’s foreign minister Wang Yi said China is open to and hoping for a renewed relationship. “We should strive to restart the dialogue, get back to the right track, and rebuild mutual trust in the next phase of Sino-US relations.”

China should not expect an immediate unwinding of the Trump agenda. In a recent New York Times interview, Biden stated that he intends to first review the existing US-China agreement and then develop a “coherent strategy” with traditional allies in Europe and Asia.  He wants trade policy that will “… actually produce progress on China’s abusive practices – that’s stealing intellectual property, dumping products, illegal subsidies to corporations” and forcing “tech transfers” from American companies to their Chinese counterparts. These goals could have been directly lifted from Trump’s US Trade Representative Section 301 Report (March 2018). Biden also wants to build leverage through bipartisan consensus for large scale investments in R&D, infrastructure, and education to compete with China. His view is that the US currently has neither the policy nor the leverage.

The Trump administration has been on a four-year campaign to redress trade imbalances, counter long-standing industry complaints regarding China’s trade practices, check China’s global cybertheft reach and deny advanced technology to its national security complex (military, intelligence, cyber and space). Trump levied tariffs, strengthened oversight of Chinese licensing and M&A activity with the signing of the Foreign Investment Risk Review Modernization Act, and expanded export controls targeting both denied parties and advanced technologies. In addition, he triggered the foreign direct product rule to restrict global companies (primarily TSMC) from product shipments to Chinese companies (primarily Huawei) using US-origin technology (notably from US EDA & IP firms and semiconductor equipment manufacturers). The Justice Department took on high profile litigation to prosecute IP theft (UMC and Fujian Jinhua). And the “Clean Networks” initiative formed alliances with more than 50 democracies dedicated to using only trusted vendors in their 5G networks.

The policy result: mixed. The trade deficit is higher today than at the outset. The US semiconductor industry reacted negatively to policies impacting market share, financial performance, and free trade, but positively to litigation addressing high profile IP theft. The semiconductor industry and many of its customers scrambled to revise global supply chains to mitigate risk. The impact to China’s technology industry was severe. Huawei was hit particularly hard by the denial of access to chips (resulting in the sale of its Honor smartphone business) and by a partial global boycott of its 5G communications systems. Huawei and SMIC are now essentially locked out of access to leading edge chip technology (7 nm and below). China retaliated with tariffs and its own denied parties list. It codified a new strategy to become self-sufficient across the entire semiconductor value stack.

The pending “de-coupling” threatens a bifurcation in global technology standards, inefficiency in R&D investment and a revival of economic nationalism. Industrial policy has (re)surfaced in the US, Europe, India and elsewhere as regions move to protect access to leading technologies, address cyber risks to national security and critical infrastructure, and secure the supply of key components. Taiwan announced an initiative to form its own semiconductor equipment industry to reduce dependence on US firms and mitigate the reach of US sanctions.

Many executives in the semiconductor industry desperately want to roll back the Trump agenda. They want unfettered access to China’s market and to global talent, but with protection of IP and freedom of action to operate globally. They want to avoid the balkanization of the industry. They acknowledge policy objectives of their countries of incorporation but want to extract the chip industry from being a lever of economic and national security policy. They do not want to be in the club long dominated by soybeans, oil, steel, airlines, and autos.

So should a Biden administration unwind, maintain, or modify policy to gain consensus and leverage? Will it acquiesce to China’s view of the “right track”? A geopolitical reality check regarding China must underpin potential policy revisions. The Biden team surely understands that there is already near bi-partisan consensus in the US Congress that China threatens global security, denies essential human rights, and disregards obligations taken under international agreements. These threat vectors will not disappear with Joe Biden in the White House.

China is a regional and global security threat with an increasingly aggressive military posture against neighbors in the South & East China Seas and on its border with India. It is prosecuting a rapid build-up of conventional and asymmetric military capability leveraged by a “civil-military fusion” policy that enables Chinese government access to any technology available in its commercial sector. It continues trade secret and IP theft through both traditional and cyber espionage. The Belt & Road Initiative and debt diplomacy through Chinese investment in overseas port facilities and raw materials personify economic strategies backing China’s goal of global hegemony. Is this being on the “right track”?

China’s election to the UN Human Rights Council belies an atrocious human rights record. It has imprisoned and forced into involuntary labor millions of Muslim Uighurs. It also persecutes Buddhist, Falun Gong and Christian communities. The Chinese Communist Party is the only acceptable orthodoxy. China abrogates obligations it has taken under international treaties. It refuses to accept the results of maritime disputes arbitrated under the UN Convention on the Law of the Sea. It unilaterally terminated its 50-year treaty on Hong Kong (one country, two systems) and imprisoned advocates for democracy. Its role in COVID-19 remains to be understood. Right track?

China masterfully leveraged access to open societies and the international trading order since joining the WTO in 2001, lifting millions of people out of poverty. But it has not met its reciprocal obligations to free, fair, and transparent trade practices. China’s economic development play book is extensive: subsidization of national champions; restrictions on foreign access to local markets; requirements on global corporations for licensing and/or minority ownership in joint ventures as the ante for market access; acquisition of global firms followed by repatriation of IP and production; cyber theft targeting commercial IP and technology critical to national security. Right track?

Finally, across the strait sits Taiwan, home to one of the most vibrant and strategic segments of the semiconductor industry. China has taken an unambiguous position on its ultimate sovereignty over Taiwan and its aim for reunification, positions not widely supported in the international community. China is using economic and military leverage to bend Taiwanese leadership and the international community toward that view. Right track?

Any US President who subordinates this threat profile in the quest for improved trade relations with China does so at the peril of the United States and its allies. Consensus and the use of leverage are central to the path forward.

Indeed, there is US bipartisan consensus in Congress on the China threat and the need to invest heavily in both research and manufacturing to keep US chip technology at the leading edge and assure security of chip supply. This consensus is exemplified in the CHIPS Act now integrated in the pending National Defense Authorization Act (NDAA). Internationally, there is consensus among more than 50 liberal democracies as to the threat posed to trusted communications networks by Huawei’s 5G platform and an associated commitment not to deploy Huawei.

Despite the revulsion toward “All Things Trump” among most leaders in technology, objective policymakers recognize this consensus and the substantial leverage bequeathed by the Trump administration on which to advance US objectives with China. How then should a Biden administration position trade policy and the semiconductor industry in this context? Should chips be exempt from use as a lever of US policy vis-à-vis China?

First, Biden should maintain all sanctions and tariffs and avoid the visceral instinct to immediately reverse the actions of the Trump administration. This would clearly signal to China that a new Biden administration shares in the US bi-partisan consensus that China is a threat to global security and that abuses of human rights and the abrogation of treaty obligations are not acceptable. For now, maintain the leverage that was painfully developed.

Next, Biden should task Katherine Tai on day one to lead the development of a “National Trade Strategy” to drive clarity of US objectives and approach on trade policy. This would guide consistency in US action and transparency for the American people, corporations, and trading partners. It should embody the high ground of “free, fair and open trade”, embrace international trade deals that expand the global economy, embody strong IP protection, provide national security carve-outs, and integrate “reciprocity and proportionality” as central tenets in countering trade treaty violations. It should support use of trade as a viable lever in achieving national policy priorities.

Third, coordinate China trade policy with liberal democratic trading partners. Those most critical from a semiconductor perspective are South Korea, Taiwan, Japan, Singapore, the EU, Israel, and India. Unilateral US action has at times disenfranchised traditional allies, but the Clean Networks alliance and the 42 nation Wassenaar Arrangement governing export control provide beachheads from which to expand. A Biden administration should evaluate conditions under which the US could join the Trans-Pacific Partnership and negotiate toward that end. It should reconcile open issues and re-engage the WTO. These actions will blunt China’s ability to further displace US global trade leadership following China’s win in finalizing the Regional Comprehensive Economic Partnership.

Fourth, unambiguously confirm support for the CHIPS Act as incorporated in the pending NDAA, or any revision needed in 2021. Extend the CHIPS Act to include multi-year funding for the comprehensive R&D imperatives in the “Decadal Plan for Semiconductors”, as recently published by the Semiconductor Research Corporation.

Finally, re-engage in trade negotiations with China with clear objectives and allied support. Establish as part of the talks a technical working group inclusive of US and Chinese entities of the SIA, SEMI and GSA. Charter the group to deliver recommendations for specific technical and governance methods of protecting IP and ensuring that the application of US-sourced technology be limited to commercial use and firewalled from China’s national security infrastructure. A robust verification regime must be the ante for lifting existing tariffs and sanctions. Phase tariffs and sanctions out in concert with China’s demonstrated acceptance of its international treaty obligations.

The Thucydides Trap posits the inevitability of military conflict between a current global hegemon and a rising power. Is war then pre-ordained for the US and China? Semiconductor technology is the key ingredient of the digital economy and is essential to the future of both countries, indeed the globe. An agreement on chips between the great powers might pave the way for resolution of other critical flash points and lead minimally to détente.

Joe Biden is right to seek US bi-partisan consensus and alignment with allies as he steps back onto the global stage. He should wisely use the multiple points of leverage passed along from the prior administration and assure that the “right track” is defined by the interests of the US and its allies, not solely those of Beijing.

Terry Daly is a retired semiconductor industry executive and senior fellow at The Council on Emerging Market Enterprises, The Fletcher School of Law & Diplomacy, Tufts University


Tesla: The Eyes Have It

by Roger C. Lanctot on 12-13-2020 at 6:00 am


David Zipper of Harvard’s Kennedy School writes in Slate that the incoming Biden Administration should “bring the hammer down” on Tesla Motors for its mislabeled and therefore misleading Autopilot application and the recently updated Full Self-Driving software beta, in the interest of the general public. Zipper’s plan, apparently, is to “stop” Tesla and somehow put Federal regulators in charge of “guiding” the electric car company in its development and deployment of self-driving technology.

Slate: “The Biden Administration Needs to Do Something about Tesla”

Zipper is correct in highlighting the limitations of Tesla’s FSD software but his hysteria is misguided. FSD – launched this past fall as a beta for customers with suitably equipped vehicles and with an array of consumer caveats – is a potential menace. But a blunt force regulatory response of the sort Zipper is advocating is hardly in order and certainly nothing the Biden Administration should sign up for – especially given the fact that Tesla has become the poster child of global American automotive technological achievement.

Nevertheless, Zipper trots out fellow travelers supporting his cause, including the National Transportation Safety Board, the National Highway Traffic Safety Administration, Partners for Automated Vehicle Education (PAVE), the AAA, the Owner-Operator Independent Drivers Association (OOIDA), the Government Accountability Office, and a somewhat ambivalent Alliance for Automotive Innovation.

What’s the real problem? How did we arrive at this moment where an innovative EV startup has disrupted industry norms and traditions with a customer-pleasing driving automation solution that simultaneously promises life-saving technological advances and the potential for sudden death? Why has Tesla stirred up such passionate opposition?

We got here because A) the NHTSA ran out of passive safety regulatory solutions such as seat belts, airbags, stability control, and anti-lock braking to reduce highway fatalities; and B) the agency has been sidelined, de-emphasized and defunded at the very moment when it needs more attention and funding to take on the challenge of regulating active safety systems such as blind spot detection, lane departure warning, automatic emergency braking, cross-traffic warning, and adaptive cruise control.

The last major NHTSA safety initiative was a voluntary effort agreed to by the automotive industry to implement automatic emergency braking. Before that came the decade-long effort to mandate backup camera technology.

If it weren’t for the COVID-19 pandemic killing thousands of Americans on a daily basis, consumers might be more troubled by the 100 Americans dying every day on U.S. roadways. Tesla’s CEO Elon Musk argues that his vehicles and his technology are part of the solution, not the problem.

The solution to the Tesla FSD beta problem is quite simple, and Zipper touches on it but fails to focus on it. The problem is the driver monitor built into Tesla vehicles. Zipper notes that it lacks an eye-tracker, thereby allowing it to be easily subverted by reckless or incautious users.

In reality, Tesla’s vehicles are already equipped with in-cabin driver and passenger monitors that may well be capable – with an over-the-air software update – of fulfilling the need for a more robust solution. Should a monitor be required, Tesla is capable of a flip-the-switch response.

So, the solution appears to be simple. NHTSA ought to initiate an investigation of the efficacy of driver monitoring systems and develop a recommendation. Given the resources and time normally required by such an investigation, though, NHTSA and the public might be better served by the pursuit of the same voluntary path taken for encouraging the adoption of automatic emergency braking.

Zipper notes the advantages of Europe’s so-called “type approval” process for reviewing and approving systems to be introduced for European automobiles. He fails to mention that the separate European New Car Assessment Program likely has overriding relevance here due to the popularity of its five-star safety ratings based on rigorous and ongoing research.

Euro-NCAP will require driver monitoring as standard equipment on all new vehicles beginning with model year 2022. All indications are that this requirement – already evolving – will eventually integrate eye tracking solutions.

As noted by Strategy Analytics in a recent report on the subject: “However, members of the UNECE safety committee believe that, by 2022, the test protocols from Euro-NCAP will be tightened to include direct monitoring of the driver’s eyes and face movements – and thus could be beneficial for interior camera-based driver monitoring systems.”

Strategy Analytics: “European Mandate Boosts Interior Camera-Based Driver Monitoring, Winners Now Emerging”

In other words, nothing less than eye-tracking will be required as standard equipment on European vehicles in order for them to achieve a five-star safety rating – roughly equivalent in the U.S. to a top rating from the Insurance Institute for Highway Safety. It’s worth noting that Consumer Reports recently gave the Comma Two’s Openpilot aftermarket driver assistance system a top rating, in part due to its integration of eye-tracking-based driver monitoring.

SOURCE: Consumer Reports

Consumer Reports: “Advanced Driver Assistance Systems – Test Results and Recommendations”

General Motors was a leader in integrating eye-tracking technology from Seeing Machines as part of its Super Cruise semi-automated driving system. Super Cruise took second place behind Comma Two in the Consumer Reports ranking. Tesla was third.

The greater significance behind the entire debate is the recognition of the efficacy of human-based driving. In its own literature, Euro-NCAP blames 90% of all crashes on human frailties. The reality is that if machines were doing all the driving today our transportation systems would fail miserably. Human beings are actually pretty good at driving cars – even if more than a million humans die every year in vehicle crashes.

The 100/day fatality rate in the U.S. actually represents great progress – but U.S. regulators are aware that they have reached an impasse. Transformative advances such as the adoption of seat belts and airbags are in the rearview mirror and active safety represents terra incognita. The path forward is literally and figuratively unclear.

The first step down this path, though, likely lies through driver monitoring, to better understand driver behavior and how to assist drivers. Rather than seeking to remove human beings from the driving task, auto makers, like Tesla, are seeking out ways to assist drivers.

The Consumer Reports report on advanced driver assist systems (ADAS) highlights the challenges of developing and refining effective and appropriate user interfaces that are helpful without being distracting, confusing, or annoying. Strategy Analytics conducts user experience research in this area as well and is on record criticizing Tesla’s FSD beta software.

Strategy Analytics: “Tesla Full Self Driving HMI – Not Useful, Not Usable, Not Safe”

We will not make progress by standing in the path of innovation. Developers, like drivers, need help and, maybe, some guidance. It may be time to appoint a proper Congressionally approved director of NHTSA and properly fund this essential organization so that it can take on its greatest challenge yet – helping machines to better assist humans in the task of safe driving.

At the very moment that the industry is poised to start removing steering wheels from cars, regulators are calling for driver monitors to make sure drivers are paying attention to the driving task. Suffice it to say there will be some very confusing messages for drivers to digest in the coming years. Let’s hope we get the messaging, the branding, and the regulations right in the interest of saving lives.


HFSS Performance for “Almost Free”

HFSS Performance for “Almost Free”
by Jim DeLap on 12-11-2020 at 10:00 am

HFSS PCB

Every day, engineers are running simulations to deliver the next generation of products to make our lives better. Every day, they wait for those simulations to finish, wishing that they could get answers instantaneously. While waiting for those simulations or checking on the status of their runs at night, they might indulge in a diversion, checking their stocks on their smartphone or playing a few minutes of their favorite game on their console. Little do they know that faster simulations are as close as that latest version of the OS on their phone or the latest downloadable content for their game.

Whenever problems occur with our smartphones, laptops, or game consoles, we confront some technical service representative for a solution. We are often met with the first question: “what version of the OS are you running?” (OK, it may be the second question, after “have you tried rebooting?”) So many times with technology, we can get better results simply by upgrading to the latest version. This story is no different with many simulation products, such as Ansys HFSS.

Ansys understands that time spent waiting for simulations to finish is time not spent making smart design decisions. One of our core priorities is to continually reduce the total time spent in our tools. Sometimes this goal is realized by a major re-architecture of how we solve a frequency sweep. Other times it can mean a better way to store data on disk so that it is easier to access in memory. Still other times it may mean new research into core computational algorithms. For some releases these speed increases may be minor, while for others they may be significant.

Often our users must rely on a centralized IT team to make changes to their simulation machines. Since enterprise IT organizations are notoriously conservative in their approach to updates, the end users do not have the latest Ansys software on their machines. We have even seen some customer situations where they are using 2-3-year-old versions of tools, and in today’s pace of technology advances, that can feel like a lifetime.

Over a recent 3-year development cycle, there was a 2.5X speed improvement from code and algorithm optimizations, including the notable addition of an S-parameter-only matrix solve in frequency sweeps. Then, by adopting Ansys-recommended best-practice setup strategies, one customer was able to improve an internal benchmark simulation time from 96 hours to just 5 hours, a 19X speed improvement with no noticeable change in the results!

Imagine if your smartphone or your laptop were 2.5 times or 20 times faster with just simple software updates. You would jump on that opportunity! To download the latest version of Ansys Electronics Desktop, please visit support.ansys.com, and for more information on the latest best practices for HFSS simulations, reach out to your local Ansys representative. For step-by-step video instructions on how to implement these best practices, check out the videos: Using HFSS to Optimize your Complex PCB Layout, True System Design with HFSS 3D Layout, and Using Azure Cloud to Rapidly Simulate Layout Designs in HFSS.

Also Read

The History and Significance of Power Optimization, According to Jim Hogan

The Gold Standard for Electromagnetic Analysis

Executive Interview: Vic Kulkarni of ANSYS


Configuration Environment is Make-or-Break for IC Verification

Configuration Environment is Make-or-Break for IC Verification
by Tom Simon on 12-10-2020 at 10:00 am

IC Verification Environment

All semiconductor design work today rests on the three-legged stool of Foundries, EDA Tools and Designers. Close collaboration among the three makes possible the successful completion of ever more complex designs, especially those at advanced nodes. Perhaps one of the most critical intersections of all three is during physical and circuit verification. IC verification configuration involves selecting the right foundry design rules, selecting verification tool options, managing design-related inputs such as libraries, design data scope and location, and managing verification tool output. To facilitate this process, Mentor has developed Calibre Interactive, which includes a GUI-based interface for managing the execution of the Calibre tool suite.

Mentor has written a paper with a high-level description of Calibre Interactive, explaining how it aids CAD engineers and designers by making verification flow setup and execution much easier and highly reproducible. The tools managed by Calibre Interactive include Calibre nmDRC, Calibre nmLVS, Calibre PERC and Calibre xRC/xACT.

The Mentor paper cites runsets, which encapsulate setup data and options, as one of the key features of Calibre Interactive. They serve as templates to simplify configuration, maintenance and reproducibility. Different runsets can be created for each of the different tasks Calibre is used for, such as LVS, extraction, DRC, etc. They can also account for the needs of various design flows, including analog, SoC, library development, etc. With runsets, many of the complex and error-prone aspects of launching a verification run can be standardized and easily reused.

One example given in the paper is that specific recipes can be created for use at the cell level that exclude checks only applicable at the block or top level. These might be context-based checks, such as connectivity and density checks. This helps avoid copious false errors that can clutter up error reports. Calibre Interactive includes an easy-to-use recipe editor, and recipes can be added to runsets. Runsets can also easily be shared, making deployment within large companies straightforward.

Calibre verification runs can be customized in a single GUI, avoiding the problem of having the parameters for each run spread out in different locations. Because all available options are shown, less time is spent looking through documentation to see what options apply for a particular PDK. CAD groups can augment Calibre Interactive with Tcl scripts. This makes it possible to only reveal secondary options if the primary option is selected. Internal and external triggers are available to control the execution of scripts. The setup of triggers is also handled through the GUI, which makes it easy to manage and understand.

The paper also lays out a vision for future features that would make it even easier to set up and manage an IC verification environment configuration. Mentor won designers over to Calibre years ago with breakthrough performance. As they have continued with leading performance and capability improvements, they have also chosen to invest in usability. Far from being a mere convenience, the ability to consistently and efficiently apply verification tools and flows directly affects design results. Mentor’s Calibre Interactive is proof that they understand this need. The paper is available for download on the Mentor website.



IEDM 2020 Starts this Weekend

IEDM 2020 Starts this Weekend
by Scotten Jones on 12-10-2020 at 6:00 am

IEDM 2020 Logo

As I have discussed before, I believe that IEDM is the premier technical conference for understanding leading edge process technologies. Beginning this coming weekend, this year’s edition of IEDM will be held virtually, and I highly recommend attending.

The conference held a press briefing last Monday. Tutorial and short course registrations are already at record levels and are still coming in. The organizers do not yet know the overall conference attendance because, based on previous virtual conferences, many registrations arrive at the last minute; they will update us after the conference.

To register for the conference go here.

The tutorials will be held Saturday the 12th and are:

  • Tutorial 1: Quantum computing technologies, Maud Vinet, Leti
  • Tutorial 2: Advanced Packaging Technologies for Heterogeneous Integration, Ravi Mahajan and Sairam Agraharam, Intel
  • Tutorial 3: Memory-Centric Computing Systems, Onur Mutlu, ETH
  • Tutorial 4: Imaging Devices and Systems for Future Society, Yusuke Oike, Sony Semiconductor Solutions
  • Tutorial 5: Innovative technology elements to enable CMOS scaling in 3nm and beyond – device architectures, parasitics and materials, Myung-Hee Na, imec
  • Tutorial 6: STT and SOT MRAM technologies and its applications from IoT to AI System, Tetsuo Endoh, Tohoku University

The short courses are roughly eight-hour classes and will be held Sunday the 13th. The short courses for this year are:

  • Short Course 1 – Innovative trends in device technology to enable the next computing revolution. Course Organizers are: Srabanti Chowdhury, Stanford University and Anne Vandooren, IMEC
  • Short Course 2 – Memory bound computing. Course Organizers are: Srabanti Chowdhury, Stanford University and Ian Young, Intel.

The conference will be held Monday the 14th through Friday the 18th and will see approximately 220 papers presented. The full program can be accessed here.

Each day will begin with a special event and they are:

  • Monday – Plenary Talk – Future Logic Scaling: Towards Atomic Channels and Deconstructed Chips, S. B. Samavedam, imec
  • Tuesday – Plenary Talk – Memory Technology: Innovations needed for continued technology scaling and enabling advanced computing systems (Invited), Naga Chandrasekaran, Micron
  • Wednesday – Plenary Talk – Symbiosis of Semiconductors, AI and Quantum Computing (Invited), S.W. Hwang, Samsung Advanced Institute of Technology
  • Thursday – Panel Discussion – What can electronics do to help solve grand societal challenges? Moderator: Ed Gerstner, Director of Journal Policy & Strategy, Springer Nature and Chair, Springer Nature Sustainable Development Goals Programme
  • Friday – Career Session – Tsu-Jae King Liu, Dean and Roy W. Carlson Professor of Engineering, University of California, Berkeley and Heike Riel, IBM Fellow, Head Science & Technology, Lead IBM Research Quantum Europe, IBM Research

I have personally identified dozens of papers I plan to attend.

An interesting observation: I attended the virtual VLSI Technology Symposium earlier this year and found the virtual format worked well. You miss the networking opportunities of a live event, but the ability to truly absorb the material presented in the papers was, in my view, superior to a live conference. At a live conference you are often sitting in a tightly spaced seat, trying to take notes while someone rapidly goes through their slides. In a virtual conference you can pause and rewind the presentation while sitting at your desk. You can also watch presentations later, ensuring you never miss one when more than one presentation is going on at a time. A virtual conference also eliminates the travel expense of an in-person conference. Personally, I will miss traveling to San Francisco for a week, but as a business owner I appreciate the savings.

During the call Monday we asked the organizers how they were envisioning next year’s conference. They said they are focused on this year’s conference for now, but they may look at a hybrid model for the future, combining in-person and virtual.

Hopefully, you can attend this key technical conference. I will blog about selected papers after the conference.

About IEDM

With a history stretching back more than 60 years, the IEEE International Electron Devices Meeting (IEDM) is the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation. The conference scope not only encompasses devices in silicon, compound and organic semiconductors, but also in emerging material systems. IEDM is truly an international conference, with strong representation from speakers from around the globe.


Altair Expands Its Technology Footprint with I/O Profiling from Ellexus

Altair Expands Its Technology Footprint with I/O Profiling from Ellexus
by Mike Gianfagna on 12-09-2020 at 10:00 am

Altair Expands Its Technology Footprint with IO Profiling from Ellexus

Altair is a broad-based technology company with an ambitious vision. As stated on their website: “Our comprehensive, open-architecture solutions for data analytics, computer-aided engineering, and high-performance computing (HPC) enable design and optimization for high performance, innovative, and sustainable products and processes in an increasingly connected world.” With a platform this broad, new additions need to be targeted and best-in-class to make a difference. That’s why a recent addition to Altair caught my attention. I wanted to explore how Altair expands its technology footprint with I/O profiling from Ellexus.

As reported on SemiWiki, Altair recently acquired Ellexus. The company is based in Cambridge, UK and its focus is I/O profiling.  About ten years old, its mission is to make every engineer an I/O expert. At first glance, one may think I/O profiling is only focused on optimization. It turns out there are many other benefits, including:

  • Debug the software environment and find performance issues
  • Detect dependencies for cloud migration
  • Protect shared file systems by finding rogue applications
  • Tune third party software deployment

What is also interesting to me is the technology pedigree of the company. Their customer list includes names like Synopsys and Microsoft Azure, among a host of others that will be familiar to the SemiWiki readership, and suggests Ellexus knows something about IC design and cloud computing. Customer quotes are not common in our world, but the Ellexus website has featured feedback from prominent semiconductor players over the years, such as Mentor and Arm. Note the main products offered by Ellexus are Mistral and Breeze.

  • Arm: “Mistral allows the infrastructure team to find and prevent bad I/O patterns and gives us a lot more information to learn from.”
  • Mentor: “Breeze gives good detailed I/O information so I only needed to make a few changes to improve runtime.”
Dr. Rosemary Francis

It’s always interesting to get a perspective on an acquisition from the inside.  I had that opportunity recently when I spoke with Dr. Rosemary Francis, founder and CEO of Ellexus. The chip design roots at Ellexus run deep. Rosemary holds a PhD in Computer Architecture from the University of Cambridge. Her research focused on network-on-chip architectures for FPGAs. After working as an IC CAD Engineer at CSR and an FPGA designer for Simba HPC and Commsonic, she founded Ellexus. Rosemary was also an advisory board member at IdeaSpace (a hub for early-stage innovation) and she is a member of the Raspberry Pi Foundation.  She is also a regular guest lecturer at Cambridge University.

I began my discussion with Rosemary by exploring her views of what the acquisition meant for Ellexus. Her response was clear and concise – worldwide reach. Ellexus has built a loyal customer base, but with a sales team of five people, the size of that customer base is limited. The industry recognition that Altair enjoys and the worldwide reach the company maintains deliver a much larger base in which to deploy Ellexus technology. Rosemary pointed out that Ellexus and Altair tools already run side-by-side at many customers. The opportunity to provide tighter integration and new use models will be significant. She mentioned storage-aware scheduling as one example; there are many.

Regarding on-prem vs. cloud, Rosemary pointed out that Ellexus began before the current explosion of cloud deployment, so they have a solid understanding and support for both on-prem and cloud requirements. For on-prem environments, I/O profiling is typically focused on performance.

For cloud environments, right-sizing becomes important: ensuring you have the data required and nothing else. Based on the performance profile of the application, it’s also sometimes possible to downgrade the type of storage used without seeing a performance hit, which can save a lot of money. Optimizing costs is important on the cloud, as they can skyrocket if you’re not careful. Rosemary had an interesting perspective on the difference between on-prem and cloud. She explained that for on-prem it’s about “time to science,” whereas for the cloud it’s about “cost to science.” I hadn’t heard this before, but it made a lot of sense. Ellexus can handle both.

Rosemary is now a chief scientist at Altair. She will be working on the integration of Ellexus technology into the Altair PBS Works™ product suite. You can learn more about PBS Works here. As we concluded our discussion, she outlined her short, medium and long-term goals in her new role:

  • Short-term: Ensure the Ellexus integration with Altair goes smoothly and all current and new customers have all the support they need in an uninterrupted way
  • Medium-term: Help shape the roadmap for Altair scheduling, workload and cloud infrastructure/migration products
  • Long-term: Leverage the significant resources of Altair to bring new and disruptive technology and products to market

One of Rosemary’s missions will be to use the success Ellexus enjoyed in the semiconductor space and replicate that in other market segments. I will watch this work with interest to see how Altair expands its technology footprint with I/O profiling from Ellexus.

Also Read

Altair HPC Virtual Summit 2020 – The Latest in Enterprise Computing

High-throughput Workloads Get a Boost from Altair

Interview with Altair CTO Sam Mahalingam


Smoother MATLAB to HLS Flow

Smoother MATLAB to HLS Flow
by Bernard Murphy on 12-09-2020 at 6:00 am

A better design path from MATLAB

It’s hard to imagine design of a complex signal processing or computer vision application starting somewhere other than in MATLAB. Prove out the algorithm in MATLAB, then re-model in Simulink to move closer to hardware. The first step is probably an architectural model, using MATLAB library functions to prove out behavior of the larger system. These function blocks (S-functions) within the model are still algorithmic and still not directly mappable to hardware. You could then (still in MATLAB) remap this architectural model to a bit-accurate Simulink model for more accurate assessment, moving closer still to hardware by using fixed-point data types rather than floating point, for example. This also gives you a reference model against which you can compare the RTL implementation you will ultimately build.

You might use the MATLAB HDL Coder to generate RTL directly from this model, but I doubt many production designs follow this path. More likely you’ll want to convert the architectural Simulink model to C++ and from there use high-level synthesis to get to RTL, which provides lots of options to experiment with PPA to meet your goals. However, this flow involves multiple levels of modeling, all manually generated, creating plenty of opportunities for mistakes, and confusion over where the mistakes might lie.

A better flow

Mentor recently released a white paper on how architects and designers can streamline this flow for fewer surprises and less effort. It starts in the same place, with the MATLAB algorithm and Simulink architectural model. The first simplification removes the Simulink hardware-level model step because, in the author’s view, it is easier to translate directly from the architectural model to class-based C++ than to another, more detailed schematic view. The second simplification results from being careful about data typing. If this is planned ahead, you can use the same C++ code for floating-point and fixed-point types with the flip of a conditional compile switch. Together these changes reduce the need for three manually generated models down to two.

Making it work

The white paper goes into detail on how you should approach mapping data types between the two platforms. This part requires some thought in comparing Simulink data types with potential C++ implementation data types, to ensure you can easily switch between floating and fixed point typedefs. I’m guessing this is worth a little extra effort to make the rest of the flow much easier.
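As a rough sketch of this single-source idea: the function below is written once against a `data_t` typedef, and a compile-time switch selects floating point or fixed point. The `q8_8` type here is a hypothetical stand-in invented to keep the example self-contained; an actual Catapult flow would typically use Mentor's Algorithmic C types (e.g. `ac_fixed`) instead.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Compile with -DUSE_FIXED to flip the whole model to fixed point.
#ifdef USE_FIXED
// Minimal Q8.8 stand-in for a real HLS fixed-point type such as ac_fixed.
struct q8_8 {
    int16_t raw;
    q8_8(double v = 0.0) : raw(static_cast<int16_t>(std::lround(v * 256.0))) {}
    operator double() const { return raw / 256.0; }
    q8_8 operator+(q8_8 o) const {
        q8_8 r; r.raw = static_cast<int16_t>(raw + o.raw); return r;
    }
    q8_8 operator*(q8_8 o) const {
        q8_8 r;
        r.raw = static_cast<int16_t>((static_cast<int32_t>(raw) * o.raw) >> 8);
        return r;
    }
};
typedef q8_8 data_t;
#else
typedef float data_t;  // floating-point reference model
#endif

// The algorithm body is written once against data_t, so the same source
// serves as both the floating-point reference and the fixed-point model.
data_t mac3(const data_t a[3], const data_t b[3]) {
    data_t acc = data_t(0.0);
    for (int i = 0; i < 3; ++i)
        acc = acc + a[i] * b[i];
    return acc;
}
```

Validating the fixed-point build against the floating-point reference then reduces to recompiling with the switch flipped and comparing outputs within a tolerance.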

Now you can generate C++ code corresponding to the architectural Simulink model, with a class definition for each hierarchical block. Here the paper suggests that the Simulink model should use hierarchy effectively to ensure easy correspondence with the C++ without unnecessary code duplication. The internals defining the functionality of each class will of course be a redesign – you can’t use the Simulink library functions. This is where you will ultimately want to experiment with implementations in synthesis – pipelining, memory architectures and so on – to get the real benefit of switching to a synthesis flow from an effectively schematic-based flow.

Validating C++ against Simulink

Building the C++ model from the Simulink architectural model is a manual step, so you need to validate correspondence through simulation. Catapult simplifies this by building an S-function from the C++. You can import this back into MATLAB and compare between this model and the architectural model. You can continue to use this push-button flow as you refine the implementation, regenerating the S-function as needed. You’d most likely want to do this as you experiment with quantization for example.

You can read the paper in full HERE.

Also Read:

A Fast Checking Methodology for Power/Ground Shorts

Mentor Offers Next Generation DFT with Streaming Scan Network

Mentor User2User Virtual Event 2020!


How Line Cuts Became Necessarily Separate Steps in Lithography

How Line Cuts Became Necessarily Separate Steps in Lithography
by Fred Chen on 12-08-2020 at 10:00 am

How Line Cuts Became Necessarily Separate Steps in Lithography

Pretty much all the semiconductor nodes in the last two decades have had at least one layer where the minimum pitch pushes the limits of the state-of-the-art lithography tool, with a k1 factor < 0.5, i.e., the half-pitch is less than 0.5*wavelength/numerical aperture. A number of published reports [1-4] have touched upon the fact that for such tight pitches, the line-end gaps tend to widen. The proof outlined briefly here, with reference to the figure below, is actually an alternative formulation to the one given in the appendix of [1].

The pitch is defined by illumination distributed about the ideal angle, with a sine of 0.5*wavelength/pitch. The numerical aperture naturally limits the sine in the perpendicular direction to sqrt(NA^2 – (0.5 wavelength/pitch)^2), or equivalently sqrt(NA^2 – (0.25 wavelength/half-pitch)^2). From Fourier diffraction theory, related to the well-known single-slit aperture diffraction problem [5], the minimum width of the gap corresponding to this maximum perpendicular sine is 0.5*wavelength/sqrt(NA^2 – (0.25 wavelength/half-pitch)^2). This is plotted as the blue curve in the figure. k1 is defined as the feature size divided by (wavelength/numerical aperture).

From the graph, it is noted that the gap will always exceed the half-pitch (indicated by the black dotted line in the figure), when the half-pitch is less than 0.5 wavelength/numerical aperture. Moreover, the smaller the half-pitch, the larger the minimum gap. This brings up some basic issues. First, the device density cannot improve much, as the widening gap offsets the shrinking line pitch. Additionally, for the metal interconnections, the next layer with the same line pitch cannot make the connections, as the required gap is too wide. Consequently, there is a need to use separate exposures to cut the line [1] or even stitch the perpendicular features at the same target pitch [2]. The latter has been promoted for bidirectional layouts by ASML as double dipole exposure lithography [2,6]. On the other hand, for unidirectional layouts, separate line cuts have become the norm, due to less stringent overlay requirements.
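To make the numbers concrete, here is a small sketch evaluating the minimum-gap expression above. The ArF immersion values (wavelength 193 nm, NA 1.35) and the 45 nm half-pitch are illustrative choices, not taken from the cited reports.

```cpp
#include <cassert>
#include <cmath>

// Minimum line-end gap from the formula above:
//   gap_min = 0.5*lambda / sqrt(NA^2 - (0.25*lambda/half_pitch)^2)
// where 0.25*lambda/half_pitch is the illumination sine that defines the pitch.
double min_gap_nm(double lambda_nm, double na, double half_pitch_nm) {
    double pitch_sine = 0.25 * lambda_nm / half_pitch_nm;
    return 0.5 * lambda_nm / std::sqrt(na * na - pitch_sine * pitch_sine);
}
```

For ArF immersion (193 nm, NA = 1.35) at a 45 nm half-pitch, this gives a minimum gap of roughly 118 nm, far wider than the half-pitch, consistent with the observation that the gap always exceeds the half-pitch once the half-pitch drops below 0.5*wavelength/NA (about 71 nm here).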

References

[1] https://semiwiki.com/lithography/285085-lithography-resolution-limits-line-end-gaps/

[2] M. Eurlings et al., Proc. SPIE 4404, 266 (2001).

[3] M. Burkhardt et al., Proc. SPIE 7274, 727404 (2009).

[4] E. van Setten et al., Proc. SPIE 9661, 96610G (2015).

[5] B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, John Wiley & Sons, 1991, pp.128-129.

[6] S. Hsu et al., Proc. SPIE 4691, 476 (2002).

Related Lithography Posts