Tesla: The Eyes Have It
by Roger C. Lanctot on 12-13-2020 at 6:00 am

David Zipper of Harvard’s Kennedy School writes in Slate that, in the interest of the general public, the incoming Biden Administration should “bring the hammer down” on Tesla for its mislabeled and therefore misleading Autopilot application and the recently updated Full Self-Driving (FSD) software beta. Zipper’s plan, apparently, is to “stop” Tesla and somehow put Federal regulators in charge of “guiding” the electric car company’s development and deployment of self-driving technology.

Slate: “The Biden Administration Needs to Do Something about Tesla”

Zipper is correct in highlighting the limitations of Tesla’s FSD software, but his hysteria is misguided. FSD – launched this past fall as a beta for customers with suitably equipped vehicles, and with an array of consumer caveats – is a potential menace. But a blunt-force regulatory response of the sort Zipper is advocating is hardly in order, and certainly nothing the Biden Administration should sign up for – especially given that Tesla has become the poster child, globally, of American automotive technological achievement.

Nevertheless, Zipper trots out fellow travelers supporting his cause, including the National Transportation Safety Board, the National Highway Traffic Safety Administration (NHTSA), Partners for Automated Vehicle Education (PAVE), the AAA, the Owner-Operator Independent Drivers Association (OOIDA), the Government Accountability Office, and a somewhat ambivalent Alliance for Automotive Innovation.

What’s the real problem? How did we arrive at this moment where an innovative EV startup has disrupted industry norms and traditions with a customer-pleasing driving automation solution that simultaneously promises life-saving technological advances and the potential for sudden death? Why has Tesla stirred up such passionate opposition?

We got here because A) the NHTSA ran out of passive safety regulatory solutions such as seat belts, airbags, stability control, and anti-lock braking to reduce highway fatalities; and B) the agency has been sidelined, de-emphasized and defunded at the very moment when it needs more attention and funding to take on the challenge of regulating active safety systems such as blind spot detection, lane departure warning, automatic emergency braking, cross-traffic warning, and adaptive cruise control.

The last major NHTSA safety initiative was a voluntary effort agreed to by the automotive industry to implement automatic emergency braking. Before that came the decade-long effort to mandate backup camera technology.

If it weren’t for the COVID-19 pandemic killing thousands of Americans on a daily basis, consumers might be more troubled by the 100 Americans dying every day on U.S. roadways. Tesla’s CEO Elon Musk argues that his vehicles and his technology are part of the solution, not the problem.

The solution to the Tesla FSD beta problem is quite simple, and Zipper touches on it but fails to focus on it: the driver monitor built into Tesla vehicles. Zipper notes that it lacks an eye tracker, which allows it to be easily subverted by reckless or incautious users.

In reality, Tesla’s vehicles are already equipped with in-cabin driver and passenger monitors that may well be capable – with an over-the-air software update – of fulfilling the need for a more robust solution. Should such a monitor be required, Tesla could respond with the flip of a switch.

So, the solution appears to be simple. NHTSA ought to initiate an investigation of the efficacy of driver monitoring systems and develop a recommendation. Given the resources and time normally required by such an investigation, though, NHTSA and the public might be better served by the pursuit of the same voluntary path taken for encouraging the adoption of automatic emergency braking.

Zipper notes the advantages of Europe’s so-called “type approval” process for reviewing and approving systems to be introduced for European automobiles. He fails to mention that the separate European New Car Assessment Program likely has overriding relevance here due to the popularity of its five-star safety ratings based on rigorous and ongoing research.

Euro-NCAP will require driver monitoring as standard equipment on all new vehicles beginning with model year 2022. All indications are that this requirement – already evolving – will eventually integrate eye tracking solutions.

As noted by Strategy Analytics in a recent report on the subject: “However, members of the UNECE safety committee believe that, by 2022, the test protocols from Euro-NCAP will be tightened to include direct monitoring of the driver’s eyes and face movements – and thus could be beneficial for interior camera-based driver monitoring systems.”

Strategy Analytics: “European Mandate Boosts Interior Camera-Based Driver Monitoring, Winners Now Emerging”

In other words, nothing less than eye tracking will be required as standard equipment on European vehicles in order for them to achieve a five-star safety rating – the equivalent in the U.S. of NHTSA’s five-star safety rating or a top pick from the Insurance Institute for Highway Safety. It’s worth noting that Consumer Reports recently gave Comma.ai’s Openpilot aftermarket driver assistance system (sold as the Comma Two) a top rating, in part due to its integration of eye-tracking-based driver monitoring.

SOURCE: Consumer Reports

Consumer Reports: “Advanced Driver Assistance Systems – Test Results and Recommendations”

General Motors was a leader in integrating eye-tracking technology from Seeing Machines as part of its Super Cruise semi-automated driving system. Super Cruise took second place behind Comma Two in the Consumer Reports ranking. Tesla was third.

The greater significance behind the entire debate is the recognition of the efficacy of human-based driving. In its own literature, Euro-NCAP blames 90% of all crashes on human frailties. The reality is that if machines were doing all the driving today our transportation systems would fail miserably. Human beings are actually pretty good at driving cars – even if more than a million humans die every year in vehicle crashes.

The 100/day fatality rate in the U.S. actually represents great progress – but U.S. regulators are aware that they have reached an impasse. Transformative advances such as the adoption of seat belts and airbags are in the rearview mirror and active safety represents terra incognita. The path forward is literally and figuratively unclear.

The first step down this path, though, likely lies through driver monitoring, to better understand driver behavior and how to assist drivers. Rather than seeking to remove human beings from the driving task, automakers like Tesla are seeking ways to assist drivers.

The Consumer Reports report on advanced driver assist systems (ADAS) highlights the challenges of developing and refining effective and appropriate user interfaces that are helpful without being distracting, confusing, or annoying. Strategy Analytics conducts user experience research in this area as well and is on record criticizing Tesla’s FSD beta software.

Strategy Analytics: “Tesla Full Self Driving HMI – Not Useful, Not Usable, Not Safe”

We will not make progress by standing in the path of innovation. Developers, like drivers, need help and, maybe, some guidance. It may be time to appoint a proper, Senate-confirmed NHTSA Administrator and properly fund this essential agency so that it can take on its greatest challenge yet – helping machines to better assist humans in the task of safe driving.

At the very moment that the industry is poised to start removing steering wheels from cars, regulators are calling for driver monitors to make sure drivers are paying attention to the driving task. Suffice it to say there will be some very confusing messages for drivers to digest in the coming years. Let’s hope we get the messaging, the branding, and the regulations right in the interest of saving lives.


HFSS Performance for “Almost Free”
by Jim DeLap on 12-11-2020 at 10:00 am

Every day, engineers run simulations to deliver the next generation of products to make our lives better. Every day, they wait for those simulations to finish, wishing they could get answers instantaneously. While waiting for those simulations, or checking on the status of their runs at night, they might indulge in a diversion by checking their stocks on their smartphone or playing a few minutes of their favorite game on their console. Little do they know that faster simulations are as close as that latest version of the OS on their phone or the latest downloadable content for their game.

Whenever problems occur with our smartphones, laptops, or game consoles, we turn to a technical service representative for a solution. We are often met with the question: “What version of the OS are you running?” (OK, it may be the second question, after “Have you tried rebooting?”) So many times with technology, we can get better results simply by upgrading to the latest version. This story is no different with many simulation products, such as Ansys HFSS.

Ansys understands that time spent waiting for simulations to finish is time not spent making smart design decisions. One of our core priorities is to continually reduce the total time spent in our tools. Sometimes this goal is realized by a major re-architecture of how we solve a frequency sweep. Other times it can mean a better way to store data on disk so that it is easier to access in memory. Still other times it may mean new research into core computational algorithms. For some releases these speed increases may be minor, while for others they may be significant.

Often our users must rely on a centralized IT team to make changes to their simulation machines. Since enterprise IT organizations are notoriously conservative in their approach to updates, end users frequently do not have the latest Ansys software on their machines. We have even seen customers using 2-3-year-old versions of tools, and at today’s pace of technology advances, that can feel like a lifetime.

Over a recent 3-year development cycle, there was a 2.5X speed improvement from code and algorithm optimizations, including the notable addition of an S-parameter-only matrix solve in frequency sweeps. Then, by adopting Ansys-recommended best-practice setup strategies, one customer was able to improve an internal benchmark simulation time from 96 hours to just 5 hours, a 19X speed improvement with no noticeable change in the results!

Imagine if your smartphone or your laptop were 2.5 times or 20 times faster after just a simple software update. You would jump on that opportunity! To download the latest version of Ansys Electronics Desktop, please visit support.ansys.com, and for more information on the latest best practices for HFSS simulations, reach out to your local Ansys representative. For step-by-step video instructions on how to implement these best practices, check out the videos: Using HFSS to Optimize your Complex PCB Layout, True System Design with HFSS 3D Layout, and Using Azure Cloud to Rapidly Simulate Layout Designs in HFSS.

Also Read

The History and Significance of Power Optimization, According to Jim Hogan

The Gold Standard for Electromagnetic Analysis

Executive Interview: Vic Kulkarni of ANSYS


Configuration Environment is Make-or-Break for IC Verification
by Tom Simon on 12-10-2020 at 10:00 am

All semiconductor design work today rests on the three-legged stool of foundries, EDA tools, and designers. Close collaboration among the three makes possible the successful completion of ever more complex designs, especially those at advanced nodes. Perhaps the most critical intersection of all three is during physical and circuit verification. IC verification configuration involves selecting the right foundry design rules, choosing verification tool options, managing design-related inputs such as libraries and design data scope and location, and managing verification tool output. To facilitate this process, Mentor has developed Calibre Interactive, which provides a GUI-based interface for managing the execution of the Calibre tool suite.

Mentor has written a paper with a high-level description of Calibre Interactive, explaining how it aids CAD engineers and designers by making verification flow setup and execution much easier and highly reproducible. The tools managed by Calibre Interactive include Calibre nmDRC, Calibre nmLVS, Calibre PERC, and Calibre xRC/xACT.

The Mentor paper cites runsets as one of the key features of Calibre Interactive. Runsets encapsulate setup data and options, and serve as templates to simplify configuration, maintenance, and reproducibility. Different runsets can be created for each of the tasks Calibre is used for, such as LVS, extraction, and DRC, and they can account for the needs of various design flows, including analog, SoC, and library development. With runsets, many of the complex and error-prone aspects of launching a verification run can be standardized and easily reused.

One example given in the paper is how specific recipes can be created for use at the cell level that exclude checks applicable only at the block or top level. These might be context-based checks such as connectivity and density checks. This helps avoid the copious false errors that can clutter up error reports. Calibre Interactive includes an easy-to-use recipe editor, and recipes can be added to runsets. Runsets can also easily be shared, making deployment within large companies straightforward.

Calibre verification runs can be customized in a single GUI, avoiding the problem of having the parameters for each run spread across different locations. Because all available options are shown, less time is spent searching documentation to see which options apply to a particular PDK. CAD groups can augment Calibre Interactive with Tcl scripts, making it possible, for example, to reveal secondary options only when the primary option is selected. Internal and external triggers are available to control the execution of scripts, and trigger setup is also handled through the GUI, which makes it easy to manage and understand.

The paper also lays out a vision for future features that would make it even easier to set up and manage an IC verification environment configuration. Mentor won designers over to Calibre years ago with breakthrough performance. As they have continued with leading performance and capability improvements, they have also chosen to invest in usability. Far from being a convenience feature, design results depend on the ability to consistently and efficiently apply verification tools and flows. Mentor’s Calibre Interactive is proof that they understand the need for this. The paper is available for download on the Mentor website.


IEDM 2020 Starts this Weekend
by Scotten Jones on 12-10-2020 at 6:00 am

As I have discussed before, I believe that IEDM is the premier technical conference for understanding leading edge process technologies. Beginning this coming weekend, this year’s edition of IEDM will be held virtually, and I highly recommend attending.

The conference held a press briefing last Monday. Tutorial and short course registrations are already at record levels and are still coming in. The organizers do not yet know the overall conference attendance because, based on previous virtual conferences, many registrations arrive at the last minute; they will update us after the conference.

To register for the conference go here.

The tutorials will be held Saturday the 12th and are:

  • Tutorial 1: Quantum computing technologies, Maud Vinet, Leti
  • Tutorial 2: Advanced Packaging Technologies for Heterogeneous Integration, Ravi Mahajan and Sairam Agraharam, Intel
  • Tutorial 3: Memory-Centric Computing Systems, Onur Mutlu, ETH
  • Tutorial 4: Imaging Devices and Systems for Future Society, Yusuke Oike, Sony Semiconductor Solutions
  • Tutorial 5: Innovative technology elements to enable CMOS scaling in 3nm and beyond – device architectures, parasitics and materials, Myung-Hee Na, imec
  • Tutorial 6: STT and SOT MRAM technologies and its applications from IoT to AI System, Tetsuo Endoh, Tohoku University

The short courses are roughly eight-hour classes and will be held Sunday the 13th. This year’s short courses are:

  • Short Course 1 – Innovative trends in device technology to enable the next computing revolution. Course organizers: Srabanti Chowdhury, Stanford University, and Anne Vandooren, imec (Download Abstract/Bio)
  • Short Course 2 – Memory bound computing. Course organizers: Srabanti Chowdhury, Stanford University, and Ian Young, Intel.

The conference will be held Monday the 14th through Friday the 18th and will see approximately 220 papers presented. The full program can be accessed here.

Each day will begin with a special event:

  • Monday – Plenary Talk – Future Logic Scaling: Towards Atomic Channels and Deconstructed Chips, S. B. Samavedam, imec
  • Tuesday – Plenary Talk – Memory Technology: Innovations needed for continued technology scaling and enabling advanced computing systems (Invited), Naga Chandrasekaran, Micron
  • Wednesday – Plenary Talk – Symbiosis of Semiconductors, AI and Quantum Computing (Invited), S.W. Hwang, Samsung Advanced Institute of Technology
  • Thursday – Panel Discussion – What can electronics do to help solve grand societal challenges? Moderator: Ed Gerstner, Director of Journal Policy & Strategy, Springer Nature and Chair, Springer Nature Sustainable Development Goals Programme
  • Friday – Career Session – Tsu-Jae King Liu, Dean and Roy W. Carlson Professor of Engineering, University of California, Berkeley and Heike Riel, IBM Fellow, Head Science & Technology, Lead IBM Research Quantum Europe, IBM Research

I have personally identified dozens of papers I plan to attend.

An interesting observation: I attended the virtual VLSI Technology Symposium earlier this year and found the virtual format worked well. You miss the networking opportunities of a live event, but the ability to truly absorb the material presented in the papers was, in my opinion, superior to a live conference. At a live conference you are often sitting in a tightly spaced seat, trying to take notes while someone rapidly goes through their slides. In a virtual conference you can pause and rewind the presentation while sitting at your desk. You can also watch presentations later, ensuring you never miss one when multiple sessions run at the same time. A virtual conference also eliminates the travel expense of an in-person conference. Personally, I will miss traveling to San Francisco for a week, but as a business owner I appreciate the savings.

During the Monday call we asked the organizers how they envision next year’s conference. They said they are focused on this year’s event, but they may look at a hybrid model for the future, combining in-person and virtual.

Hopefully, you can attend this key technical conference. I will blog about selected papers after the conference.

About IEDM

With a history stretching back more than 60 years, the IEEE International Electron Devices Meeting (IEDM) is the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation. The conference scope not only encompasses devices in silicon, compound and organic semiconductors, but also in emerging material systems. IEDM is truly an international conference, with strong representation from speakers from around the globe.


Altair Expands Its Technology Footprint with I/O Profiling from Ellexus
by Mike Gianfagna on 12-09-2020 at 10:00 am

Altair is a broad-based technology company with an ambitious vision. As stated on their website: “Our comprehensive, open-architecture solutions for data analytics, computer-aided engineering, and high-performance computing (HPC) enable design and optimization for high performance, innovative, and sustainable products and processes in an increasingly connected world.” With a platform this broad, new additions need to be targeted and best-in-class to make a difference. That’s why a recent addition to Altair caught my attention. I wanted to explore how Altair expands its technology footprint with I/O profiling from Ellexus.

As reported on SemiWiki, Altair recently acquired Ellexus. The company is based in Cambridge, UK, and its focus is I/O profiling. About ten years old, its mission is to make every engineer an I/O expert. At first glance, one might think I/O profiling is only about optimization. It turns out there are many other benefits, including the ability to:

  • Debug the software environment and find performance issues
  • Detect dependencies for cloud migration
  • Protect shared file systems by finding rogue applications
  • Tune third party software deployment

What is also interesting to me is the technology pedigree of the company. Their customer list includes names like Synopsys and Microsoft Azure, among a host of others familiar to the SemiWiki readership, and suggests Ellexus knows something about IC design and cloud computing. Customer quotes are not common in our world, but the Ellexus website has featured feedback from prominent semiconductor players over the years, such as Mentor and Arm. Note that the main products offered by Ellexus are Mistral and Breeze.

  • Arm: “Mistral allows the infrastructure team to find and prevent bad I/O patterns and gives us a lot more information to learn from.”
  • Mentor: “Breeze gives good detailed I/O information so I only needed to make a few changes to improve runtime.”

It’s always interesting to get a perspective on an acquisition from the inside. I had that opportunity recently when I spoke with Dr. Rosemary Francis, founder and CEO of Ellexus. The chip design roots at Ellexus run deep. Rosemary holds a PhD in Computer Architecture from the University of Cambridge, where her research focused on network-on-chip architectures for FPGAs. After working as an IC CAD engineer at CSR and an FPGA designer for Simba HPC and Commsonic, she founded Ellexus. Rosemary was also an advisory board member at IdeaSpace (a hub for early-stage innovation), is a member of the Raspberry Pi Foundation, and is a regular guest lecturer at Cambridge University.

I began my discussion with Rosemary by exploring her views of what the acquisition meant for Ellexus. Her response was clear and concise: worldwide reach. Ellexus has built a loyal customer base, but as a company with five sales folks, the size of that customer base is limited. The industry recognition Altair enjoys and the worldwide reach it maintains deliver a much larger base in which to deploy Ellexus technology. Rosemary pointed out that Ellexus and Altair tools already run side by side at many customers, so the opportunity to provide tighter integration and new use models will be significant. She mentioned storage-aware scheduling as one example; there are many.

Regarding on-prem vs. cloud, Rosemary pointed out that Ellexus began before the current explosion of cloud deployment, so they have a solid understanding and support for both on-prem and cloud requirements. For on-prem environments, I/O profiling is typically focused on performance.

For cloud environments, right-sizing and ensuring you have the data required and nothing else becomes important. Based on the performance profile of the application, it’s also sometimes possible to downgrade the type of storage used without seeing a performance hit, which can save a lot of money. Optimizing costs is important in the cloud, as they can skyrocket if you’re not careful. Rosemary had an interesting perspective on the difference between on-prem and cloud: for on-prem it’s about “time to science,” whereas for the cloud it’s about “cost to science.” I hadn’t heard this before, but it made a lot of sense. Ellexus can handle both.

Rosemary is now a chief scientist at Altair. She will be working on the integration of Ellexus technology into the Altair PBS Works™ product suite. You can learn more about PBS Works here. As we concluded our discussion, she outlined her short, medium and long-term goals in her new role:

  • Short-term: Ensure the Ellexus integration with Altair goes smoothly and all current and new customers have all the support they need in an uninterrupted way
  • Medium-term: Help shape the roadmap for Altair scheduling, workload and cloud infrastructure/migration products
  • Long-term: Leveraging the significant resources of Altair, bring new and disruptive technology and products to the market

One of Rosemary’s missions will be to take the success Ellexus enjoyed in the semiconductor space and replicate it in other market segments. I will watch this work with interest to see how Altair expands its technology footprint with I/O profiling from Ellexus.

Also Read

Altair HPC Virtual Summit 2020 – The Latest in Enterprise Computing

High-throughput Workloads Get a Boost from Altair

Interview with Altair CTO Sam Mahalingam


Smoother MATLAB to HLS Flow
by Bernard Murphy on 12-09-2020 at 6:00 am

It’s hard to imagine design of a complex signal processing or computer vision application starting anywhere other than in MATLAB. Prove out the algorithm in MATLAB, then re-model in Simulink to move closer to hardware. First comes an architectural model, using MATLAB library functions to prove out the behavior of the larger system. These function blocks (S-functions) within the model are still algorithmic and still not directly mappable to hardware. You could then (still in MATLAB) remap this architectural model to a bit-accurate Simulink model for more accurate assessment, moving closer still to hardware by using fixed-point data types rather than floating point, for example. This also gives you a reference model against which you can compare the RTL implementation you will ultimately build.

You might use the MATLAB HDL Coder to generate RTL directly from this model, but I doubt many production designs follow this path. More likely you’ll want to convert the architectural Simulink model to C++ and from there use high-level synthesis to get to RTL, which provides lots of options to experiment with PPA to meet your goals. However, this flow involves multiple levels of modeling, all manually generated, creating lots of opportunities for mistakes – and confusion over where the mistakes might lie.

A better flow

Mentor recently released a white paper on how architects and designers can streamline this flow for fewer surprises and less effort. It starts in the same place, with the MATLAB algorithm and Simulink architectural model. The first simplification removes the Simulink hardware-level model step because, in the author’s view, it is easier to translate directly from the architectural model to class-based C++ than to another, more detailed schematic view. The second comes from being careful about data typing: if this is planned ahead, you can use the same C++ code for floating-point and fixed-point types with the flip of a conditional compile switch. Together these changes reduce the number of manually generated models from three to two.

Making it work

The white paper goes into detail on how you should approach mapping data types between the two platforms. This part requires some thought in comparing Simulink data types with potential C++ implementation data types, to ensure you can easily switch between floating and fixed point typedefs. I’m guessing this is worth a little extra effort to make the rest of the flow much easier.
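
To make this concrete, here is a minimal sketch of what that conditional-compile switch could look like. This is my own illustration, not code from the white paper; it assumes Mentor’s Algorithmic C (AC) datatypes, which Catapult synthesizes, and the bit widths and the fir() function are invented for the example.

```cpp
// One compile-time switch selects the numeric types for the whole model.
#ifdef USE_FIXED_POINT
#include <ac_fixed.h>                      // AC datatypes shipped with Catapult
typedef ac_fixed<18, 2, true> coeff_t;     // 18 bits total, 2 integer bits, signed
typedef ac_fixed<24, 4, true> accum_t;     // wider accumulator for headroom
#else
typedef double coeff_t;                    // floating-point reference build
typedef double accum_t;
#endif

// The same algorithmic code compiles against either type set, so the
// floating-point and bit-accurate models cannot drift apart.
template <int N>
accum_t fir(const coeff_t (&coeffs)[N], const coeff_t (&taps)[N]) {
  accum_t acc = 0;
  for (int i = 0; i < N; i++) {
    acc += coeffs[i] * taps[i];            // quantization behavior differs per build
  }
  return acc;
}
```

Compile once with -DUSE_FIXED_POINT and once without, run both builds against the same stimulus exported from Simulink, and the difference in outputs directly measures your quantization error.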

Now you can generate C++ code corresponding to the architectural Simulink model, with a class definition for each hierarchical block. Here the paper suggests that the Simulink model should use hierarchy effectively to ensure easy correspondence with the C++ without unnecessary code duplication. The internals defining the functionality of each class will of course be a redesign – you can’t use the Simulink library functions. In any case, this is where you will ultimately want to experiment with implementations in synthesis – pipelining, memory architectures, and so on – to get the real benefit of switching from an effectively schematic-based flow to a synthesis flow.

Validating C++ against Simulink

Building the C++ model from the Simulink architectural model is a manual step, so you need to validate correspondence through simulation. Catapult simplifies this by building an S-function from the C++, which you can import back into MATLAB and compare against the architectural model. You can continue to use this push-button flow as you refine the implementation, regenerating the S-function as needed – most likely as you experiment with quantization, for example.

You can read the paper in full HERE.

Also Read:

A Fast Checking Methodology for Power/Ground Shorts

Mentor Offers Next Generation DFT with Streaming Scan Network

Mentor User2User Virtual Event 2020!


How Line Cuts Became Necessarily Separate Steps in Lithography
by Fred Chen on 12-08-2020 at 10:00 am

Pretty much all of the semiconductor nodes of the last two decades have had at least one layer where the minimum pitch pushes the limits of the state-of-the-art lithography tool, with a k1 factor < 0.5, i.e., the half-pitch is less than 0.5*wavelength/numerical aperture. A number of published reports [1-4] have touched upon the fact that at such tight pitches, line-end gaps tend to widen. The proof outlined briefly here, with reference to the figure below, is an alternative formulation to the one given in the appendix of [1].

The pitch is defined by illumination distributed about the ideal angle, with a sine of 0.5*wavelength/pitch. The numerical aperture naturally limits the sine in the perpendicular direction to sqrt(NA^2 – (0.5*wavelength/pitch)^2), or equivalently sqrt(NA^2 – (0.25*wavelength/half-pitch)^2). From Fourier diffraction theory, related to the well-known single-slit aperture diffraction problem [5], the minimum width of the gap, corresponding to this maximum perpendicular sine, is 0.5*wavelength/sqrt(NA^2 – (0.25*wavelength/half-pitch)^2). This is plotted as the blue curve in the figure. k1 is defined as the feature size divided by wavelength/numerical aperture.
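
To make the formula concrete, here is a worked example with representative ArF immersion values (my own illustrative numbers, not taken from the article): wavelength = 193 nm, NA = 1.35, half-pitch = 40 nm (k1 ≈ 0.28).

$$
\text{min gap} = \frac{0.5 \times 193}{\sqrt{1.35^2 - (0.25 \times 193 / 40)^2}} = \frac{96.5}{\sqrt{1.8225 - 1.4550}} \approx \frac{96.5}{0.606} \approx 159\ \text{nm}
$$

A single exposure therefore cannot print a line-end gap anywhere near the 40 nm half-pitch; instead, continuous lines are printed and the tight gaps are formed in a separate cut exposure.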

From the graph, it is noted that the gap will always exceed the half-pitch (indicated by the black dotted line in the figure) when the half-pitch is less than 0.5*wavelength/numerical aperture. Moreover, the smaller the half-pitch, the larger the minimum gap. This raises some basic issues. First, device density cannot improve much, as the widening gap offsets the shrinking line pitch. Additionally, for metal interconnections, the next layer at the same line pitch cannot make the connections, as the required gap is too wide. Consequently, there is a need to use separate exposures to cut the lines [1], or even to stitch together the perpendicular features at the same target pitch [2]. The latter has been promoted for bidirectional layouts by ASML as double dipole exposure lithography [2,6]. For unidirectional layouts, on the other hand, separate line cuts have become the norm, due to less stringent overlay requirements.

References

[1] https://semiwiki.com/lithography/285085-lithography-resolution-limits-line-end-gaps/

[2] M. Eurlings et al., Proc. SPIE 4404, 266 (2001).

[3] M. Burkhardt et al., Proc. SPIE 7274, 727404 (2009).

[4] E. van Setten et al., Proc. SPIE 9661, 96610G (2015).

[5] B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, John Wiley & Sons, 1991, pp.128-129.

[6] S. Hsu et al., Proc. SPIE 4691, 476 (2002).

Related Lithography Posts


Verification IP proves essential for PCIe GEN5
by Tom Simon on 12-08-2020 at 6:00 am

PCI Express (PCIe) has become an important communication element in a wide range of systems. It is used to connect networking, storage, FPGA, and GPGPU boards to servers and desktop systems. It has progressed a long way from its parallel-bus PCI origins; the evolution to a serial, point-to-point configuration has been accompanied by ever-increasing speeds and throughput. PCIe GEN5 boosts individual lane speeds to 32 GT/s, doubling the previous generation’s rate. Along with this speed improvement come a number of changes to the specification. Complex new functionality and added clock rates mean that verification has become more difficult. To address these verification needs, Mentor has developed Questa Verification IP (QVIP) for PCIe GEN5 that fully complies with the new spec and can be configured as needed.

Mentor has a white paper titled “Tackling Verification Challenges for PCIe GEN5” that discusses what is new in GEN5 and introduces the case study of a product developed using their QVIP. The authors, Mentor’s Akshay Sarup and Anritsu’s Kazuhiro Fukinuma, discuss the five major areas where the specification has changed and then review the addition of PCIe GEN5 support to Anritsu’s MP1900A signal quality analyzer.

PCIe GEN5 makes allowances for the PHY layer to be used for other protocols; the PHY can support PCIe and one or more other protocols at the same time. Negotiating the use of alternative protocols requires support for Modified TS1/TS2 Ordered Sets (OS) alongside the standard TS1/TS2 Ordered Sets. This negotiation occurs at the same time as lane number negotiation. If the connected port does not support the Modified TS1/TS2 OS, training reverts to the standard TS1/TS2 OS and switches to the appropriate encodings.

Precoding is available at the 32 GT/s rate to help mitigate issues caused by the -36 dB insertion loss target. With higher decision feedback equalizer (DFE) tap ratios, certain patterns, such as alternating 1s and 0s, are susceptible to error propagation from a single bit flip. This can result in error bursts that can potentially defeat the CRC’s detection capability. Precoding XORs each bit with the prior bit. It is requested by a receiver prior to entering the 32 GT/s rate and stays on for the entire rate setting, until the next equalization.
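
As a rough illustration of the mechanism (my own minimal sketch of a 1/(1+D)-style precoder, based on the description above rather than on the QVIP or the spec text), precoding and its inverse are just running XORs. The payoff is that a burst of consecutive flipped bits on the wire decodes to errors only at the two burst boundaries, which the CRC can reliably detect:

```cpp
#include <cstdint>
#include <vector>

// Transmitter: each wire bit is the data bit XORed with the previously
// transmitted wire bit (initial state assumed to be 0).
std::vector<uint8_t> precode(const std::vector<uint8_t>& data) {
  std::vector<uint8_t> tx(data.size());
  uint8_t prev = 0;
  for (size_t i = 0; i < data.size(); ++i) {
    tx[i] = data[i] ^ prev;
    prev = tx[i];
  }
  return tx;
}

// Receiver: XOR of consecutive received bits recovers the data. A burst of
// flipped bits tx[k..m] corrupts only data[k] and data[m+1], because the
// flips cancel pairwise inside the burst.
std::vector<uint8_t> decode(const std::vector<uint8_t>& tx) {
  std::vector<uint8_t> data(tx.size());
  uint8_t prev = 0;
  for (size_t i = 0; i < tx.size(); ++i) {
    data[i] = tx[i] ^ prev;
    prev = tx[i];
  }
  return data;
}
```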

Equalization has also changed. Rather than stepping up sequentially through the slower transfer rates, the specification provides the ability to bypass the slower speeds and move directly to 32 GT/s for equalization. Of course, if that equalization fails, the speed drops back to 8 GT/s for equalization. Ports can alternatively advertise that no equalization is needed, in which case previously saved settings are applied. Together these additions can shave significant time off the equalization process.

The above changes necessitated changes in the Link Training and Status State Machine (LTSSM). Previously reserved bits are now used for new functionality, and the sequence of events during training has changed as a result of these new features and the higher operating speed. Added to this are new testability features that allow more flexibility in setting up loopback tests.

Verification IP

Anritsu, one of the leading companies providing PCIe test and measurement solutions, used Mentor’s PCIe GEN5 QVIP to accelerate the development of new features supporting GEN5. With the QVIP’s high configurability, they could easily set up parameters however their testbenches required. Using assertions, they were able to eliminate a number of subtle design issues early in the process. The Mentor QVIP includes built-in sequences that shorten ramp-up time for users.

The Mentor white paper does a good job of highlighting the new features in GEN5. It also shows how the QVIP was used by the Anritsu team to provide comprehensive verification and deliver a robust signal quality tester. As PCIe grows more complex and sophisticated, the case for verification IP is becoming pretty clear. The white paper is available for download on the Mentor website.


The Practitioners View of DAC – Design, IP and Embedded
by Mike Gianfagna on 12-07-2020 at 10:00 am

The First DAC

Next year will mark the 58th year of the Design Automation Conference. It’s hard to wrap your head around the fact that this event dates back to 1964, when rock ’n’ roll was new, cars were big, and computers were even bigger. In its early days, the event was called the Design Automation Workshop. Pictured above is the cover of the very first proceedings – no longer available on Amazon, unfortunately. Throughout its storied life, DAC has presented some of the most advanced and important innovations that have shaped the technology world around us. The conference began, and continues today, as a place to present high-quality, cutting-edge research. Over the years, exhibits were added to showcase the results of this research. More recently, a series of tracks was added to showcase the practitioners’ view of DAC – design, IP, and embedded. I’d like to focus on this part of the conference.

The Designer and IP tracks have been part of the DAC program since 2010; the Embedded track was added last year. Putting on DAC is a huge undertaking, and a great deal of the work involved is done by a group of volunteers from industry and academia whose willingness to give back is truly noteworthy. I recently had a chance to chat with one of those folks – Ambar Sarkar of NVIDIA, chair of the Designer, IP and Embedded tracks on the DAC Executive Committee.

Ambar began by explaining the structure of this part of the DAC program. There are front-end and back-end Designer tracks, as well as an IP track and an Embedded systems track. Each track contains submitted work as well as special invited sessions. In reality, there is some grey area between front-end/back-end and IP design, which is fine; Ambar chairs a committee that helps sort all this out. You can learn more about the workings of the IP committee in this interview I did with Randy Fish, and you can see who’s who on the DAC Executive Committee here.

I spent a bit of time discussing the submitted-work portion of these tracks with Ambar. As I mentioned earlier, DAC is a high-profile, prestigious place to present your work. It is highly regarded, and the work presented at DAC is often cited and used broadly in semiconductor and EDA. The Designer, IP and Embedded tracks share this spotlight, but there is an important difference. Submitting a technical paper to the DAC Research track takes a fair amount of work: the final submission includes a technical manuscript and a presentation, with peer-review vetting along the way. The deadline for submitting to the Research track, traditionally in November of each year, has already passed.

In 2020, overall submissions to the Designer and IP tracks rose 15%, continuing a steady three-year rise: 160 paper submissions in 2018, 170 in 2019 and 197 in 2020.  This blog post will provide more information on the 2020 Designer and IP track submission trends.  Ambar said he is confident there will continue to be a rise in submissions in these tracks. 

The requirements for submission to the Designer, IP and Embedded tracks are a bit different. Here, an abstract of approximately 100 words and up to six PowerPoint slides are needed, and the submission deadline is later than the Research track’s. Submissions are peer-reviewed by a number of industry/domain experts to ensure quality, but the submission process for this part of the conference has been streamlined. In spite of the simpler process, I can tell you the work presented in these tracks receives a lot of attention due to its very high-quality technical content. Over the years, I have been involved in many Designer track and IP track presentations, and it has been a very rewarding experience.

Consider that developing new IP along with its software, and integrating it into a new SoC, is a fundamental innovation engine for the semiconductor industry. The Designer, IP and Embedded tracks shine a spotlight on that work at a very high-profile conference. If you’re proud of something you’ve worked on this past year, with a customer or internally at your company, you should definitely consider a submission. The work is reasonable, and the reward is significant.

Submissions to these tracks are open now, and the deadline is January 20, 2021. Think about what you’d like to present and submit a proposal before the holidays; you’ll be notified of acceptance between March 10 and 18, 2021. There are clear instructions on how to submit your work on the DAC website, including a detailed outline for developing your six slides. Past presenting companies are also shown there – it’s quite an impressive list, and you want to be on it. Here are the links:

Check it out to learn about the practitioners view of DAC – design, IP and embedded.


How Intel Stumbled: A Perspective from the Trenches
by Daniel Nenni on 12-07-2020 at 6:00 am

Bloomberg did an interview with my favorite semiconductor analyst, Stacy Rasgon, on “How the Number One U.S. Semiconductor Company Stumbled” that I found interesting. Coupled with the Q&A Bob Swan did at the Credit Suisse Annual Technology Conference, I thought it would make good content for a viral blog.

Stacy Rasgon is an interesting guy and a lot like me when it comes to offering blunt questions, observations, and opinions that sometimes throw people off. As a result, Stacy is not always the first to ask questions during investor calls, and sometimes he is not called on at all, which was the case for the most recent Intel call.

Stacy is the Managing Director and Senior Analyst, US Semiconductors, for AB Bernstein here in California. Interestingly, Stacy has a PhD in Chemical Engineering from MIT, not the usual degree for a sell-side analyst. Why semiconductors? Stacy did a co-op at the IBM TJ Watson Research Center during his graduate studies, and that hooked him.

I thought it was funny back when Brian Krzanich (BK) was CEO of Intel: BK has a bachelor’s degree in Chemistry from San Jose State University, and he was answering questions from an analyst with a PhD from MIT. The current Intel CEO, Bob Swan, is a career CFO with an MBA, so maybe that explains the communication issues.

In the Bloomberg interview the focus was on the delays in Intel’s processes, starting with 14nm, then 10nm, and now 7nm. Unfortunately, they missed the point. For most of the history of the semiconductor industry, leading-edge processes were more like wine – in the words of the great Orson Welles, “We will sell no wine before its time.” Guided by Moore’s Law, Intel successfully drove down the bumpy process road until FinFETs came along.

The first FinFET process was Intel 22nm, which was the best-kept secret in semiconductor history. We don’t know if it was early or late, since it was not discussed before it arrived. 14nm followed, and it was late due to defect density/yield problems. We talked about that on SemiWiki quite a bit, and I had a squabble with BK at a developer conference: I knew 14nm was not yielding, and he said it was, only to retract that comment at the next investor call. Intel 10nm is probably the tardiest process in the history of Intel, and now 7nm is in question as well.

The foundries historically ran 1-2 nodes behind Intel, so they got a relative pass on being late with new processes – up until TSMC 10nm, which technically caught Intel 14nm.

Bottom line: Leading-edge processes use new technologies and materials, which challenge yield from many different directions. This is a very complex business, so it’s extremely difficult to predict schedules because “you never know until you know.” So, try as one might, abiding by Moore’s Law in the FinFET era is a fool’s errand, absolutely.

The other major Intel disruption is the TSMC/Apple partnership. Apple requires a new process each year, starting at 20nm (iPhone 6). As a result, TSMC now does half steps with new technologies: at 20nm TSMC introduced double patterning, then added FinFETs at 16nm. At 7nm TSMC later introduced limited EUV and called it 7nm+, and at 5nm TSMC implemented full EUV.

This is a serious semiconductor manufacturing paradigm shift that I call “The Apple Effect”: TSMC must have a new process ready for the iProduct launch every year without fail, which means the process must be frozen at the end of Q4 for production starting in the following Q2. The net result is a serious amount of yield learning, which produces shorter process ramps and superior yield.

The other interesting point: during Bob Swan’s Credit Suisse interview, he mentioned the word IDM 33 times, emphasizing the IDM advantage over being fabless. Unfortunately, this position is a bit outdated. Long gone are the days when fabless companies tossed designs over the foundry wall to be manufactured.

TSMC, for example, has a massive ecosystem of partners and customers who together spend billions of dollars on research and development for the greater good of the fabless semiconductor ecosystem. There is also an inner circle of partners and customers that TSMC intimately collaborates with on new process development and deployment, including Apple of course, plus AMD, Arm, Applied Materials, ASML, Cadence, and Synopsys, just to name a few.

Bottom line: The IDM siloed approach to semiconductor design and manufacturing is outdated. It’s all about the ecosystem, and Intel will learn this firsthand as they increasingly outsource to TSMC in the coming process nodes.