
A cautionary tale for the digital economy
by Terry Daly on 04-08-2020 at 10:00 am


COVID-19 underscores the importance of US-based production for strategic industries

The COVID-19 pandemic has drawn intense focus to the need to repatriate pharmaceutical manufacturing to the United States. The realization that a strategic adversary manufactures or controls up to 80% of the active pharmaceutical ingredients used to produce drugs has shocked the nation. What other strategic industries are at risk of offshore dependency? Foremost on the list is electronics, most especially the semiconductor industry.

No industry has a more pervasive and strategic impact on the economy than semiconductors. “Chips” are the essential infrastructure of the digital economy, embedded in every connected hardware platform.  They drive consumer electronics, telecommunications, media, the internet, the cloud, transport, medical discovery and medical devices, education, finance, energy, agriculture, government and more. US national security runs on chips, including defense, intelligence, cyber and space. To safeguard the digital economy during conflict or national crisis, the US Government must take a more proactive posture to guard against offshore dependency and supply disruption.

The semiconductor industry is a complex global network of companies including product design, design tools and intellectual property (IP), chip manufacturing, outsourced packaging and test (OSAT) and semiconductor equipment and materials. “Fabless” firms solely design chips. “Foundries” focus exclusively on manufacturing. “Integrated Device Manufacturers” (IDMs) both design and manufacture chips. The US has capable manufacturers in Intel, TI, On Semiconductor and Micron, but as IDMs they manufacture only the products they design. Thanks to their decades-long investments in the United States, the US is well prepared for the products these companies make.

However, for many other high-volume chips used across essential digital platforms, the US is critically dependent on offshore factories. This is evident in the supply of leading-edge technologies and the equipment and materials central to the manufacturing process. Fabless product companies such as Qualcomm, AMD and Xilinx have virtually all leading-edge products (7 nanometer and below) manufactured offshore, mostly at TSMC in Taiwan. The commercial Foundries in the US, GLOBALFOUNDRIES and TowerJazz, do not offer leading-edge technology. AMKOR, the only US-based commercial-scale OSAT, has all its manufacturing located offshore. Dutch equipment company ASML holds a virtual monopoly on the lithography equipment used for arguably the most critical step in chip production. Key raw materials such as cobalt, gallium, tungsten and germanium are also critical to chip production, and China is estimated to hold at least 80% of world production of these and of rare earth materials.

A prolonged denial of access to suppliers in Taiwan, South Korea, Japan and elsewhere in Southeast Asia during time of crisis would severely risk supply of chips for US communications networks, data centers, medical devices, financial systems and the electric grid.  While the US has effective mitigation to assure the supply of critical parts for national security through the Department of Defense’s Trusted Foundry, its scope and scale are insufficient to address our nation’s other critical infrastructure needs. United States policy should target self-sufficiency for both national security and critical infrastructure needs.

Domestic “burst” capacity in chip manufacturing is a much tougher task than production of ventilators, masks and personal protective equipment – as important as these are to our current national emergency. The issue is time. To build, equip, qualify and ramp a new chip factory requires a minimum of two years, well beyond the time available to meet an emergency. Existing factories at Intel, GLOBALFOUNDRIES, TI, On Semiconductor and Micron could be re-purposed on a quicker timetable, but these companies would need access to the IP and know-how of offshore competitors such as TSMC, and potentially additional equipment, to build non-IDM and leading-edge products. Alternatively, Fabless firms could re-spin their product designs to be built on US-based IDM process technologies, a non-trivial effort requiring several months to achieve volume. Invoking the Defense Production Act can trigger action but not solve the issue of time.

Decisive US policy is needed to re-balance the equation between government-led readiness and the continuation of an unbridled free-market approach to semiconductor manufacturing.

First, the Administration needs to adopt policy proposals that enhance, not diminish, the competitiveness of our leading semiconductor companies. For example, the pending decision to invoke the “foreign direct product rule” to inhibit supply of chips to Huawei will inflict severe financial damage on US equipment suppliers and hand hard-earned market share to Japan and South Korea. Second, the US should fund aggressive financial incentives for both US and global manufacturers (with focus on TSMC and Samsung) to build or expand leading-edge factories in the US. This plan should include subsidies for capital and operating expenditures sufficient to eliminate current cost disadvantages versus Southeast Asia, as well as competitive tax incentives. Creative public-private models can achieve the best of the innovative and efficient private sector while assuring adequate, responsive US-based supply. Third, the US should fund a Strategic Semiconductor Reserve comprising US Government priority access to domestic “burst” manufacturing capacity, physical stockpiles of rare earth materials, and, in concert with industry, a government-funded virtual finished goods inventory of chips and other components found in essential infrastructure platforms. Finally, the US should expand advanced research funding to assure US leadership in Artificial Intelligence, 5G, quantum computing and other emerging technologies.

Whether in a Phase 4 “infrastructure” bill or a normal appropriations cycle, the President and Congress must think expansively in redressing the strategic risk inherent in the current US posture. One would prefer that a free trade regime govern independent investment decisions by US and global corporations. But as COVID-19 has brought to light, establishing national readiness for exceptional circumstances requires implementation of pro-active public policy ahead of crisis.

Terry Daly is a retired semiconductor industry executive


Best Practices for IP Reuse
by Bernard Murphy on 04-08-2020 at 6:00 am


As someone who was heavily involved with rules for IP reuse for many years, I have a major sense of déjà vu in writing again on the topic. But we (in SpyGlass) were primarily invested in atomic-level checks in RTL and gate-level designs. There is a higher, process-level set of best practices we didn’t attempt to cover. ClioSoft just released a white paper (authored by Jeff Markham) on that topic, which was forwarded to me by my old Atrenta buddy Simon Rance (now VP of Marketing at ClioSoft).

Jeff covers a lot of territory, on creating IP and evaluating commercial IP, with his own views on what the industry could do to make these easier. I’m more drawn to the question of what it takes to make an IP reusable because that was a hot topic when we started. There was a book called the IP Reuse Methodology Manual, then considered the bible for what you should and should not do. There was also a lot of debate about how practical it was to invest in making internally developed IP reusable.

I heard fairly generally that the effort to make an IP reusable is significant – maybe 3X the original cost of developing the IP. This would get it to the point where you could search a library by function, process, parametrics, documentation, that sort of thing, to find and compare the IP with other comparable solutions. Then you could download, say, a behavioral model, an abstract model and a floorplan to check it out in simulation. Then finally download the full package with all requisite views and other collateral you would need to use it immediately in your design.
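
As a toy illustration of that kind of catalog search (the entries, field names and helper below are entirely hypothetical, not drawn from any real reuse system), a minimal Python sketch might look like this:

```python
# Hypothetical IP catalog entries; the fields mirror the kind of metadata
# (function, process, parametrics, available views) discussed above.
catalog = [
    {"name": "usb2_phy_a", "function": "USB 2.0 PHY", "process": "28nm",
     "max_freq_mhz": 480, "views": ["behavioral", "abstract", "full"]},
    {"name": "usb2_phy_b", "function": "USB 2.0 PHY", "process": "16nm",
     "max_freq_mhz": 480, "views": ["behavioral", "full"]},
    {"name": "ddr4_ctrl",  "function": "DDR4 controller", "process": "16nm",
     "max_freq_mhz": 1600, "views": ["abstract", "full"]},
]

def find_ip(function, process, min_freq_mhz=0):
    """Return candidate IP blocks matching function, process and a parametric floor."""
    return [ip for ip in catalog
            if ip["function"] == function
            and ip["process"] == process
            and ip["max_freq_mhz"] >= min_freq_mhz]

# Narrow to candidates, then (per the flow described above) pull a behavioral
# or abstract model first to evaluate, before downloading the full views.
for ip in find_ip("USB 2.0 PHY", "16nm", min_freq_mhz=400):
    print(ip["name"], ip["views"])
```

The point is not the code, of course, but the metadata discipline behind it: none of this works unless every reusable block is consistently tagged and versioned.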

Nice idea, and some semiconductor design organizations actually organized to support that development. Perhaps some still do, but for many it was a luxury they couldn’t afford. Reuse makes a lot of sense when you have a production line and are pumping out a lot of similar designs, which is what we were all expecting in that era of platform-based design.

Unfortunately for many who believed, markets did a 180 on their original dreams of platform-based design and massive reuse. I’m too disconnected from the details these days to make bold pronouncements, but I wouldn’t be surprised to hear that those dreams went up in smoke. That for most design teams today, reusable IP either comes from IP vendors or reuse has devolved to mean “here’s something fairly close that we used in the last design, adapt it as you see fit for the current design”.

In fact much of the reuse I have seen has evolved further to “we built this chip in the last generation, now it’s going to be a subsystem in this new generation”. Which makes a lot of sense when you think about it. That chip was proven in production, which is a pretty decent (though not perfect) stamp of certification. Good enough anyway when you make payroll by shipping product and you don’t have time to rebuild the subsystem.

Which is not to say that we don’t need to follow reuse best practices, with maybe some selectivity. Good engineering is built on good practices and becomes even more essential as we work on larger designs. Even if you’re going to use “copy and adapt” reuse, you still need to find a best candidate to start from, understand the functionality, parametrics, etc., etc. Maybe there’s something closer to what you need that would be a better starting point. Maybe there’s something in there that will fit the bill exactly – stranger things have happened.

Jeff has a good long list of suggestions. You could save yourselves a lot of time by following them. You can read more about Cliosoft HERE.

Also Read

WEBINAR REPLAY: AWS (Amazon) and ClioSoft Describe Best Cloud Practices

WEBINAR REPLAY: ClioSoft Facilitates Design Reuse with Cadence® Virtuoso®

WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®


Lithography Resolution Limits: Paired Features
by Fred Chen on 04-07-2020 at 10:00 am


As any semiconductor process advances to the next generation or “node”, a sticky point is how to achieve the required higher resolution. As noted in another article [1], multipatterning (the required use of repeated patterning steps for a particular feature) has been practiced for many years already, and many have looked to EUV lithography as a potential escape from more multipatterning. In reality, the requirement for multipatterning depends on the feature to which it is applied. This article is the first in a series exploring key cases, to see when multipatterning may be avoided and, if it can’t be, what the most practical way to do it is likely to be. The first case to be considered is the simplest: two features separated by a very small distance. These may represent, for example, two neighboring vias located at the ends of two separate long metal lines (Figure 1).

Figure 1. Two neighboring vias at the ends of two long metal lines represent the basic case of two small features separated by a small distance.

The Rayleigh criterion

The Rayleigh criterion is a textbook formula giving the resolving power of an imaging system with a given numerical aperture. The radius of the smallest spot that can be focused is given by r = 0.61 wavelength/numerical aperture. The numerical aperture here is basically the radius of the lens divided by the focusing distance (or focal length) [2]. If a second spot is located at the edge of the first spot, i.e., at this radial distance r, it cannot be resolved. But once it is moved further away, it can begin to be resolved. Hence, this distance defines a resolution limit for the minimum distance between two spot images (Figure 2).

Figure 2. The Rayleigh criterion defines the resolution between two features.

For an immersion lithography system, the wavelength is 193 nm and the numerical aperture is 1.35, giving a minimum interspot distance of 87 nm. This distance might be reduced a little by the use of attenuated phase-shifting masks; the improvement depends on the transmission of the phase-shift mask [3]. For an EUV system, the wavelength is less narrowly defined (it is a pretty wide band), but nominally it is taken to be 13.5 nm, with a numerical aperture of 0.33, giving a minimum interspot distance of 25 nm. Unfortunately, to date, EUV lacks phase-shifting technology. A fundamental barrier is EUV’s relative lack of monochromaticity (it extends beyond the nominal 13.3-13.7 nm bandwidth), compared to the DUV excimer laser.
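
As a quick numerical check of those two cases, here is a short Python sketch of the Rayleigh spot-distance calculation, using the nominal wavelength and numerical aperture values quoted above:

```python
# Minimum two-spot distance per the Rayleigh criterion: r = 0.61 * wavelength / NA
def rayleigh_min_distance(wavelength_nm: float, numerical_aperture: float) -> float:
    """Return the Rayleigh-limited minimum spot separation in nanometers."""
    return 0.61 * wavelength_nm / numerical_aperture

# Nominal values quoted above for immersion (ArF, 193 nm, NA = 1.35)
# and EUV (13.5 nm, NA = 0.33) systems.
for name, wavelength, na in [("193i immersion", 193.0, 1.35), ("EUV", 13.5, 0.33)]:
    print(f"{name}: {rayleigh_min_distance(wavelength, na):.0f} nm")
# Prints ~87 nm for immersion and ~25 nm for EUV.
```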

Getting below the Rayleigh criterion: Double patterning

To print two features closer together than the Rayleigh criterion allows will therefore require some form of double patterning. The simplest case would be to print one of the features using one mask, then the other with a second mask (Figure 3). This approach is often referred to as LELE, an abbreviation for “Litho-Etch-Litho-Etch”. A key drawback of this approach is the obviously critical dependence on the overlay between the two exposures.

Figure 3. The LELE (litho-etch-litho-etch) approach for double patterning, applied to the two vias of Figure 1.

In an alternative form of LELE, the second exposure “cuts” the first exposed feature (Figure 4). In other words, the second exposure has opposite tone to the first. This has somewhat better alignment of the two features, at least vertically in this case.

Figure 4. In this LELE variant, the second exposure cuts the first exposed feature, being opposite in tone.

Getting below the Rayleigh criterion: Print many and trim

Alternatively, it is possible to print an array of the features at a tight pitch, and remove (“trim”) the ones which are not required with a second mask exposure (Figure 5), or even leave them as “dummy” features. Printing arrayed features allows the space between features to be reduced to less than 0.6 wavelength/numerical aperture (see Appendix). The second mask exposure’s overlay requirement would actually be a little relaxed compared to the above LELE case, but it still has to be tighter than the distance between features.

Figure 5. The array plus trim approach, applied to the vias of Figure 1. The shaded area represents a blocking mask to prevent the previously printed array features from being further etched.

No EUV advantage for sub-25 nm isolated pairs?

A careful consideration of the above techniques may make it clear that patterning isolated (or else sufficiently widely separated) pairs of features spaced by less than 25 nm can be done by two immersion exposures (LELE style), one EUV exposure plus one immersion exposure (array plus trim), or two EUV exposures (LELE or array plus trim). The immersion-only LELE approach is more sensitive to overlay than an array plus trim approach using EUV, but with current capabilities of ~2 nm already established with immersion tools [4], for distances of ~15-20 nm, there is no real advantage in going to EUV for this case.

Appendix: Rayleigh criterion (k1=0.61) vs. k1<0.61

Some of you may reasonably ask how the Rayleigh criterion above does not conflict with the low-k1 cases (distance = k1 * wavelength/numerical aperture, k1 < 0.61) commonly encountered in immersion lithography. This will be covered more explicitly in the next article, but for now the question can be answered with reference to Figure 6. Basically, the Rayleigh criterion addresses the smallest image of a point, i.e., the point spread function. For an assembly of features, the point spread function is mathematically convolved with the locations of the features. When the features are spread widely, the width of the point spread function obviously dominates the image. However, when the features are densely packed together, at distances comparable to the Rayleigh criterion or even less, the width of the point spread function no longer determines the image; instead, the spatial frequency corresponding to the pitch of the features does. The point spread function only contributes to the blur (contrast degradation) of the image.

Figure 6. When the point spread function is convolved with a wide pitch (top) it dominates the image. When it is convolved with a dense pitch of comparable dimensions (bottom), the pitch dominates the image.
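
For readers who want to experiment with the idea in Figure 6, the following is a simplified one-dimensional sketch: a Gaussian stand-in for the point spread function (an illustrative assumption, not a rigorous Airy or aerial-image model) is convolved with a widely spaced and with a densely pitched train of point features, and the modulation contrast of the central region is compared.

```python
import numpy as np

# 1D position grid in nanometers
x = np.arange(-400.0, 400.0, 1.0)

# Stand-in point spread function: a Gaussian whose full width (~87 nm)
# roughly matches the immersion Rayleigh distance computed above.
sigma = 37.0
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()

def image_of(pitch_nm, n_features=7):
    """Convolve the PSF with a train of point features at the given pitch."""
    obj = np.zeros_like(x)
    centers = (np.arange(n_features) - n_features // 2) * pitch_nm
    for c in centers:
        if x.min() <= c <= x.max():
            obj[np.argmin(np.abs(x - c))] = 1.0
    return np.convolve(obj, psf, mode="same")

for pitch in (300, 90):  # widely spaced features vs. a dense pitch
    img = image_of(pitch)
    central = img[np.abs(x) < 100]  # look at the central region of the image
    contrast = (central.max() - central.min()) / (central.max() + central.min())
    print(f"pitch {pitch} nm: modulation contrast ~ {contrast:.2f}")

# Wide pitch: well-separated peaks of roughly PSF width (contrast near 1).
# Dense pitch: the image period is set by the pitch, and the PSF mostly
# just degrades the contrast, consistent with Figure 6.
```

With these illustrative numbers, the wide-pitch case prints a modulation contrast near 1 while the dense-pitch case drops well below 0.1, which is the qualitative point of Figure 6.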

More quantitatively, we can examine the image produced by the (coherent) illumination of two points in the object plane. This is shown in Figure 7.

Figure 7. Two points separated by a given distance in the object plane appear wider in the image plane, when the object separation is at the Rayleigh criterion or less.

When the two objects are separated by the Rayleigh criterion or less, the image shows a wider than actual separation. Furthermore, the peak intensities are reduced and the central region shows higher intensity throughout. This is due to the growing influence of the neighboring point’s diffracted field in the image plane.

References

[1] https://www.linkedin.com/pulse/how-semiconductor-industry-got-itself-multipatterning-frederick-chen/

[2] B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 1991), p.131.

[3] R. Socha et al., “Resolution Enhancement with High Transmission Attenuating Phase Shift Masks,” Proc. SPIE 3748, 290 (1999).

[4] https://www.evaluationengineering.com/home/article/13012512/asml-ships-new-immersion-lithography-platform


Synopsys is Changing the Game with Next Generation 64-Bit Embedded Processor IP
by Mike Gianfagna on 04-07-2020 at 6:00 am

Figure: ARC HS5x/HS6x block diagram

Synopsys issued a press release this morning that has some important news – Synopsys Introduces New 64-bit ARC Processor IP Delivering Up to 3x Performance Increase for High-End Embedded Applications. At first glance, one could assume this is just an announcement for some new additions to the popular ARC processor family. While that is true, there’s a lot more to the story. The newly announced processor IP has the potential to change the way embedded systems are designed.

I had the opportunity to chat with Mike Thompson, senior product marketing manager, ARC Processors at Synopsys. Before I get into more details, a bit about Mike. He drives the definition and marketing of the high-end ARC microprocessor products at Synopsys. Mike knows something about this market, having been involved in design and support of microprocessors, microcontrollers, IP cores, and the development of embedded applications and tools for over 30 years at places like MIPS, ZiLOG, Philips, AMD, and Actel. He holds a commanding view of the market and its needs.

One more important item before we dig into the details. Today, April 7, Mike is presenting the newly announced processor IP at the Linley Spring Processor Conference, which is now a virtual event. His presentation is from 11:40 AM – 12:00 PM, Pacific time. I highly recommend you join that event if you’re registered for the conference.

“The growing complexity of high-end embedded systems such as in networking, storage, and wireless equipment demands greater processor functionality and performance without sacrificing power efficiency,” said Mike Demler, senior analyst at The Linley Group. “Synopsys’ new ARC HS5x and HS6x CPUs meet those needs, but they also provide the configurability and scalability needed to support future embedded-system requirements as well.”

OK, so why do I think this announcement is a big deal? First, the basics. The following sentence from the press release says it well:

“The 32-bit ARC HS5x and 64-bit HS6x processors, available in single-core and multicore versions, are implementations of a new superscalar ARCv3 Instruction Set Architecture (ISA) and deliver up to 8750 DMIPS per core in 16-nm process technologies under typical conditions, making them the highest performance ARC processors to date.”

Setting a new performance bar is important, but that’s just the beginning of the story. These new processor cores extend capabilities in many directions, creating a new “canvas” if you will for embedded applications. Some capabilities make the job easier; some create fundamentally new opportunities. Regarding making the job easier, the 64-bit processor supports up to a 52-bit physical address space, which can directly address up to 4.5 petabytes. That’s a lot of room for innovation and significantly larger than anything else available.
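
As a quick back-of-the-envelope check of that 4.5-petabyte figure (nothing Synopsys-specific here, just the arithmetic behind a 52-bit physical address):

```python
# A 52-bit physical address reaches 2**52 distinct byte locations.
addressable_bytes = 2 ** 52
print(f"{addressable_bytes:,} bytes")                # 4,503,599,627,370,496
print(f"~{addressable_bytes / 1e15:.1f} petabytes")  # ~4.5 (decimal) petabytes
```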

There is support for up to 12 coherent CPU cores per processor cluster with L1 cache coherency. Mike explained that most applications today support four CPU cores, with a few offering up to eight. Twelve cores, without degraded throughput, open up the opportunity for new applications. The processors can be configured for real-time operation or with an advanced memory management unit (MMU) that supports symmetric multiprocessing (SMP) Linux and other high-end operating systems.

Another noteworthy capability is support for up to 16 user-implemented hardware accelerators with memory coherency. Hardware accelerators represent the “secret sauce” for many applications. Mike explained that few processors support hardware accelerators directly, and those that do only support one. Sixteen changes the game. Mike listed some more features that make these processor cores extremely flexible and provide a lot of power optimization opportunities:

  • Support for asynchronous clocking for all CPU cores and hardware accelerators that allows the cores to be clocked at different speeds than the interconnect and other cores in the processor cluster
  • Support for individual power domains for all CPU cores, all hardware accelerators and the interconnect itself
  • Support for the industry standard ACE interfaces to easily connect to a network-on-chip (NoC) that might be implemented in the SoC
  • Coherency between the CPU cores with snooping, and coherency between the hardware accelerators with support for snooping
  • Cluster shared memory, under software control, that can be used to move data between the CPU cores, the hardware accelerators and the NoC
  • High-bandwidth, low latency access to up to 16 MB of closely coupled memory (CCM) that is shareable between the CPU cores and hardware accelerators, providing single cycle access to local memory

At this point in our discussion, I felt that the application of these new cores was only limited by the imagination of the designer. This technology is quite complex – a direct quote from Mike drives the point home:

“We verify this IP with a few trillion vectors on a 100,000-server farm.”

I was starting to get dizzy. The new processors are backward compatible with the EM, HS3x and HS4x processors, which is very convenient. Software development is supported by Synopsys’ ARC MetaWare Development Toolkit, which includes an advanced C/C++ compiler optimized for the processors’ superscalar architecture, a multicore debugger to debug and profile code, and a fast instruction set simulator (ISS) for pre-hardware software development. A cycle-accurate simulator is also available for design optimization and verification. Open-source software support for the processors includes the Zephyr real-time operating system, an optimized Linux kernel, the GNU Compiler Collection (GCC), GNU Debugger (GDB), and the associated GNU programming utilities (binutils).

ARC Processor EXtension (APEX) technology that enables the support of custom instructions is also included with these processors, as it is with all ARC processors. Mike mentioned that something like 80% of ARC users take advantage of this capability.

We also discussed the need many users have to understand what kind of performance they can expect from synthesizable IP in a particular technology. Mike described a very valuable service whereby Synopsys can run a benchmark implementation of a customer’s design in a target technology to reduce risk.  I know from first-hand experience how important this can be.

The stated application areas for the new IP are: solid state drives (SSDs), wireless baseband, wireless control, home networking, cloud networking and edge networking. Given the strong support for hardware accelerators, I’ll be interested to see what new applications are invented by the customer base. As I mentioned, please connect to Mike’s presentation at the Linley Spring Processor Conference if you can.

Also Read:

Security in I/O Interconnects

Synopsys Tutorial on Dependable System Design

Use Existing High Speed Interfaces for Silicon Test


Webinar on Detecting Security Vulnerabilities in SoCs
by Tom Simon on 04-06-2020 at 10:00 am

Figure: Secure Development Lifecycle for SoCs

As more security-related capabilities are added in hardware, the effort required to ensure that SoCs are not prone to attack is changing. Hardware has the initial appeal of creating physical barriers to attack, yet it presents its own difficulties. For one thing, a flaw in a hardware security feature is much harder to fix in the field, which makes it an appealing target for bad actors. Hardware-related attack surfaces offer a larger ROI, and hackers know that because such flaws are hard to fix, they can exploit them for a longer period of time.

A webinar titled “Detecting Security Vulnerabilities in a RISC-V Based System-on-Chip” by Dr. Nicole Fern of Tortuga Logic covers SoC security by offering several examples of vulnerabilities that have been discovered, the various ways that SoCs can be compromised, and approaches that can be used to improve the process of developing and testing SoCs so that security issues can be significantly reduced.

Nicole goes over the ‘Nail Gun’ attack, which allows code running on an insecure core to accomplish privilege escalation on another core by activating a debug mode strictly through software. No physical access to the system is required. Another of her examples is a boot tampering attack on Cisco routers where the bit-stream for an FPGA was stored in unprotected memory and could be modified by attackers. She also cited a documented BLE software stack issue with TI chips that allows insecure code to process Bluetooth packets.

To prevent hardware-related attacks, Nicole discusses three system-level security goals. First, each block or IP must not contain vulnerabilities of its own. Next, the process of SoC integration must not introduce vulnerabilities; examples are improper management of debug and test interfaces and accidentally grounding a privilege bit in a peripheral interface. Lastly, software configuration and usage of hardware features must be correct. For instance, assets can be mismanaged during secure boot, MPUs can be misconfigured, or on-chip bus interfaces can be improperly programmed.

Nicole advocates a Secure Development Lifecycle (SDL) for hardware to help avoid security related problems in finished designs. At each stage of SoC and system development there are corresponding steps that should be taken. At the root of any security plan is threat modeling and creating a security specification. Naturally this is followed by security verification.

These steps often rely on in-house or consultant security expertise. There are several repositories of known vulnerabilities, such as Mitre CVE and CWE, that provide essential information for securing SoCs. At each step there needs to be design and architecture review. Formal verification methods can be applied. Simulation-based and directed negative testing approaches can be used. Lastly, post-silicon penetration testing is necessary. These approaches have some shortcomings, which Nicole discusses in the webinar. She concludes that there need to be ways, approachable for non-experts, to guide security requirement creation from weakness databases such as CWE.

At the end of the webinar Nicole details how Tortuga Logic’s Radix technologies offer a method that existing design teams can use to efficiently and effectively execute an SDL for hardware. Radix offers both simulation and emulation to detect potential risks in a design. She makes the important point that, unlike functional verification, which cares about data values, security simulation cares about data flow, transmission and access.

The webinar is an eye-opening look at how SoC security can be compromised even if the best hardware logic design methods are used. Security design is in many ways orthogonal to functional design. At the very least it represents another perspective on what must be considered during the design process. Anyone who cares about SoC-based system security should view the webinar, which will be held on Tuesday, April 14th at 10 AM PDT.


What’s New in CDC Analysis?
by Bernard Murphy on 04-06-2020 at 6:00 am

Opening figure: validating assumptions in CDC constraints

Synopsys just released a white paper, a backgrounder on CDC. You’ve read enough of what I’ve written on this topic that I don’t need to re-tread that path. However, this is tech so there’s always something new to talk about. This time I’ll cover a Synopsys survey update on numbers of clock domains in designs, also an update on ways to validate CDC constraints.

First the survey, extracted from their 2018 Global User Survey. There are still some designs, around 31%, using 5 or fewer clock domains. The largest segment, 58%, has between 21 and 80 domains. And as many as 23% have between 151 and a thousand domains. Why so many? Some of this will continue to be thanks to external interfaces of course.

Clearly a lot of this will be a result of power management, running certain functions faster or slower to trade off performance against overheating. And a growing number of designs, most prominently really large ones such as datacenter-bound AI accelerators, are so large that it is no longer practical to try to balance clock trees across the whole design. Now designers are using GALS (globally asynchronous, locally synchronous) techniques in which local domains are synchronous and crossings between these domains are effectively asynchronous.

All of which means that CDC analysis, like everything else in verification, continues to grow in complexity.

Turning to constraints, I’ve mentioned before that CDC depends on having info about clocks, naturally, and the best place to get that info is from SDC timing constraints. You have to build those files anyway for synthesis, timing analysis and implementation. Might as well build it right in one set of files and use that set everywhere, including for CDC. That’s a lot trickier than it sounds for a CDC tool if you’re not already tied into the implementation tools. Getting the interpretation right for Tcl constraints can be very challenging. Not a problem for Synopsys of course.

Along those lines, it’s a lot easier to figure out if one clock is a divided-down or multiplied-up version of another if you understand the native constraints. Which helps a lot with false errors. No need to report problems between two clocks if one is an integral multiple of the other.
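
To make that concrete, here is a minimal sketch (with hypothetical clock values, not tool code) of the kind of test a CDC tool can apply once it understands how one clock is derived from another: if the two periods are related by an integer ratio, a crossing between them can be timed rather than flagged as asynchronous.

```python
from fractions import Fraction

def integer_related(period_a_ns: float, period_b_ns: float, tol: float = 1e-9) -> bool:
    """True if one clock period is an integer multiple of the other."""
    ratio = Fraction(period_a_ns / period_b_ns).limit_denominator(1000)
    # An integer multiple in either direction (divided-down or multiplied-up clock).
    is_integer_multiple = ratio.denominator == 1 or ratio.numerator == 1
    return is_integer_multiple and abs(period_a_ns / period_b_ns - float(ratio)) < tol

# Hypothetical clocks: a 100 MHz clock (10 ns) and a divided-down 25 MHz (40 ns) version.
print(integer_related(10.0, 40.0))   # True  -> crossing can be timed, not reported as CDC
print(integer_related(10.0, 33.3))   # False -> treat as a genuinely asynchronous crossing
```

A real tool of course also needs to know the two clocks share a source and a defined phase relationship, which is exactly the information the SDC constraints provide.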

Other prolific generators of false errors are those pesky resets, configuration signals and other signals which switch very infrequently, usually under controlled conditions allowing sufficient time for the signals to settle. In fact, at Atrenta we found that in many cases these could be the dominant source of noise in CDC analysis.

Static analysis has no idea these signals are in any way special. It sees them crossing from one clock domain to another and assumes they have to be synchronized like every other signal. That would be a waste in most cases – of power, area, latency, something that doesn’t need to be wasted.

So we invented quasi-static constraints (honestly I don’t know if we truly invented the concept, I just don’t know that we didn’t) to tag signals that could be ignored in CDC analysis. These have no meaning in implementation so won’t be cross-checked, but they are amazingly effective; noise rates plummet. There’s just one problem. Because they’re not cross-checked elsewhere, if you mislabel a signal as quasi-static, you could miss real errors.

Synopsys figured out a way to cross-check meta-constraints like these. They convert them into SVA assertions which you can then pull into simulation regressions to check that the claim really holds up. This is diagrammed in the opening figure. Pretty neat. I think Synopsys really has CDC verification covered thanks to this extension. Click here to read Synopsys’ white paper.

Also Read:

SpyGlass Gets its VC

Prevent and Eliminate IR Drop and Power Integrity Issues Using RedHawk Analysis Fusion

Achieving Design Robustness in Signoff for Advanced Node Digital Designs


I Cancelled My Flight
by Roger C. Lanctot on 04-05-2020 at 10:00 am


Three weeks into the current period of COVID-19 “social distancing” guidelines, I have cancelled already-booked flights to Barcelona (cancellation of Mobile World Congress), Austin, Tex. (cancellation of SXSW), and London (cancelled company meeting). So it seemed logical that I’d cancel my flight to San Diego for the now-cancelled International Bridge Tunnel Turnpike Association gathering, but for some reason I toyed with the idea of taking the flight on a lark.

I finally decided to cancel this flight this morning after speaking with a friend Friday about my thought process. My friend proceeded to disabuse me of the notion that taking such a flight would have been anything other than a life-threatening proposition. He went on to justify his stance by sharing multiple stories of friends and family currently suffering under either positive COVID-19 diagnoses or hospitalized with debilitating symptoms.

Oddly, I had already questioned the wisdom of at least two colleagues pondering international flights – one for a vacation to Puerto Rico and one for a wedding in the Dominican Republic. The vacation was delayed. The wedding was cancelled. As I told both of these colleagues in an irritatingly raised voice: “No one is flying anywhere!” Yet, I too, was “weighing my options.” Sad.

Thank goodness for my friend. My friend described how he was driving from Detroit to Dulles Airport, located near me, to meet his daughter returning from an exchange program in Nigeria. Turns out the program, originally scheduled to be completed in May, was terminated prematurely a few weeks ago, a termination that was quickly followed by a closing of the only available airport. Cue terror and a call to the State Department.

Moreover, this friend reminded me what I already knew from stories seen on TV, heard over the radio, or read about in the newspaper: that anyone with severe enough COVID-19 symptoms to require hospitalization had to be prepared to say their final good-byes when checking in at the hospital – as no visitors would be allowed. This is not to say that hospitalization is a for-sure death sentence, but, by now, we have all become acquainted with the many tales of patients dying alone or left with a tenuous Facetime lifeline in their final hours.

For me, the lesson was that everyone should, like me, reach out to friends and acquaintances to catch up on events and spread good will and empathy at this stressful time – but, mainly, to get a sanity check. My COVID-19 prescription is for you to call your friends until you find one – or more – with a personal experience of a friend, co-worker, or family member touched by COVID-19. Only in this way can we begin to come to terms with the magnitude of the crisis in this time of disconnection.

SOURCE: Strategy Analytics survey shows between 20% and 25% of respondents, globally, report knowing someone personally diagnosed with COVID-19.

It’s essential that we all reach out. We need to make an effort to connect to overcome and bridge our social isolation. The mayhem unfolding in some hospitals is happening out of sight, the struggling companies and their workers are reduced to statistics. Reach out.

From reaching out myself I have learned of the companies in the automotive industry immediately impacted by shuttered factories as they only get paid as vehicles are made. Some companies get paid when vehicles are sold. Some companies only begin making money once vehicles are being used.

Vehicles are no longer being made, sold, or used. Revenue is not flowing. Governments are stepping in to help.

American citizens are complaining that their rights are being abused under the current social distancing and stay at home orders. Businesses are suing to re-open their doors. The U.S. President laments the cure – social distancing and stay at home – being worse than the illness.

There are many reasons for these measures being taken mainly by local governments. Reach out and find someone you know who has been touched by the disease, someone you trust, and get your own personal firsthand account. I guarantee that after having such a conversation you will no longer be in doubt as to the severity of the crisis and the appropriateness of the response.

One of the stories that caused me to consider actually taking flight appeared on Morning Edition on NPR. A volunteer medical courier shared his story of a flight with five flight attendants and two passengers – himself plus one other.

The story told me two things. First, that the $50B Federal relief package recently signed into law requires that the airline beneficiaries continue to fly to all of the cities to which they flew prior to the COVID-19 outbreak. And, second, that the only people flying were those with urgent business – i.e., this is no time to be flying for fun.

The experience of cancelling my fourth flight since the arrival of COVID-19 also reminded me that nothing will be the same again. We have a new normal. Three weeks into our current social isolation I am seeing barriers being erected in retail outlets to protect checkout clerks.

SOURCE: Checkout counter with barrier at Lowe’s

Let’s face it. This is the new normal. There will be many accommodations. But if we want to see what those future accommodations are going to look like, we are going to have to stay indoors as much as possible to live to see that day.

In fact, the government turned up the prophylactic recommendations this week, suggesting that all people wear facial masks in public. I can report that this recommendation is not being universally adopted, but I am definitely seeing more handwashing and use of gloves.

So, please, find a friend touched by COVID-19. It shouldn’t be too hard. In fact, it is getting easier to do by the day. Think about the lag time between infection and the appearance of symptoms. Think about the stories of sufferers describing their inability to breathe – like being under water. Think about the loved ones saying good-bye at the doors of the hospital, possibly for the last time. Think about yourself.

I have one more flight left to cancel – to Tel Aviv (for now-cancelled Ecomotion). I’ll probably leave this cancellation to the last minute just like the flight I cancelled this morning – and I’ll flirt with the idea of actually flying. But I won’t fly, because life is too short and precious and flying today is too dangerous. Even leaving your house is dangerous today.


UPDATE: Everybody Loves a Winner
by Mike Gianfagna on 04-05-2020 at 9:00 am


Building a successful startup is hard, very hard. Creating a new category along the way is even more difficult. Those that succeed at both endeavors are quite rare. This is why an upcoming ESD Alliance event is a must-see in my view. The event is entitled “Jim Hogan and Methodics’ Simon Butler on Bootstrapping a Startup to Profitability”. Originally scheduled as a live event at Semi headquarters in Milpitas, CA, this event is now a webcast scheduled for May 1, 2020 from 11:00 AM – 12:00 PM PDT. The event is free.

There’s a lot packed into that title. First, the challenge of bootstrapping a startup. Without the benefit of large sums of other people’s money up-front, getting things off the ground is daunting. Getting the enterprise to profitability is even more difficult. The event will also explore how Methodics created a new category, IP Lifecycle Management (IPLM) along the way. I can tell you from first-hand experience that creating a new category is difficult if you are a large, successful public company. Doing it as a startup is impressive.

I’ve been through many complex SoC design projects that contain IP from a wide variety of sources, both inside and outside the enterprise.  Ensuring that all the moving parts are of high quality, with the right configurations, versions and firmware attached is clearly a requirement for success. Yet, many organizations struggle with some aspect of this process at least once on every project.  Sometimes more than once. I suspect SemiWiki’s readership can share plenty of stories on this topic.

So, hearing about a holistic approach to this problem is definitely worth the time as well. Beyond the education value of this event, there will be entertainment value. I’ve been to many of Jim Hogan’s interview events over the years. Jim has a casual and disarming style that gets folks to actually be themselves and offer new and meaningful insights. Whether he’s exploring artificial intelligence, the latest chip design paradigm or IP management, a Jim Hogan hosted event will be memorable.

I’m particularly interested in learning more about how Methodics did several difficult-to-impossible tasks successfully from Simon Butler. As a co-founder of the company, I suspect he has some great stories.

So, if you want to learn how to build a successful startup, create a new category or do a better job with your next SoC design, you can get it all on Friday, May 1, 2020 from 11:00AM – 12:00PM. You’ll also be entertained and be able to ask questions of Jim and Simon as well. You can register for the webinar here. Don’t miss this one.

Also Read

Avoiding Fines for Semiconductor IP Leakage

Webinar Recap: IP Security Threats in your SoC

WEBINAR: Generating and Measuring IP Security Threat Levels For Your SoCs


Can a Pandemic Stop the Apocalypse?
by Roger C. Lanctot on 04-04-2020 at 8:00 am


The negative impacts of the coronavirus, COVID-19, on the automotive industry continue to radiate out from the closure of factories and dealerships (for vehicle sales, while service operations continue) to employee furloughs and plunging stock prices. At the same time, the global pandemic has begun to undermine the investment rationale behind four core industry-wide initiatives collectively described as “CASE” or “ACES”: i.e., Connected, Autonomous, Shared, and Electrified driving.

These four sectors are frequently identified at industry conferences – remember when we used to attend those? – as the strategic underpinnings of the future of transportation. Even as the mantra of ACES was being embraced by the industry, though, investment bankers were grumbling at the capital expenditures that were flowing to these R&D and trial endeavors that were producing nothing but rivers of red ink.

Those investment bankers may now be breathing heavy sighs of relief as, one by one, the ACES activities of OEMs appear to be falling by the wayside as enthusiasm wanes in the face of pandemic impacts or regulatory authorities lose their nerve. The first “horseman” to fall was autonomous vehicles.

Car companies working on autonomous vehicle solutions, such as General Motors, Ford Motor Company, Daimler, and others, have discovered that making a serious effort in AV development can easily require $250M or more per quarter in non-recoverable expenses. Worse yet, this effort is both technology and labor intensive. There is no easy way to rein in these costs and the development activity takes place in a high risk environment that can literally put lives at risk. Just ask Uber.

It is perhaps no surprise that weeks ago most AV operators in the U.S. – which were using paired human safety drivers and driver monitors – terminated their operations for the duration of the “social distancing” enforcement guidelines in the U.S. The valuations of these companies have correspondingly begun their own curve flattening, led by Waymo.

Soon after the shutdown of these AV activities, reports of financial stress began to emerge from peer-to-peer car sharing operators Turo and Getaround. Only yesterday came news that Penske Corporation was shutting down its Penske Dash car subscription service. Can Free2Move’s U.S. operations be far behind? Does anyone want to share a car these days?

Some analysts thought car share and bike and scooter share operators would benefit from an operating environment in which public transportation was either not available or severely curtailed. The reality has been that all sharing has been severely impacted (weeks ago scooter operators Bird and Lime exited the European market) by an environment suddenly bereft of leisure travelers, high volumes of pedestrian traffic, or social activity of any kind. Stay at home government guidance has almost completely extinguished the sharing economy.

Next to fall was electrification as gasoline prices plunged around the world (below $2.00 in the U.S.) and the U.S. government lowered fuel efficiency targets. European car makers, too, have renewed their opposition to aggressive EU CO2 reduction mandates targeting vehicle emissions. Looks like people have generally lost interest in being environmentally conscious.

The last horseman standing is connectivity. It may well be that connectivity is the sole core automotive technology initiative to survive the COVID-19 scourge. The industry may abandon autonomous vehicles, shared vehicles, and electrification – but connectivity seems bound to endure.

It is not as if the motivation doesn’t exist for de-prioritizing connectivity. Car companies committing to connecting their cars know they are taking on a nine-figure investment with an uncertain return. But compared to the hundreds of millions of dollars required to support autonomous vehicle, electric vehicle, or shared vehicle development, $100M to set up and maintain a connected vehicle platform looks like a rounding error.

The automotive industry is poised on the threshold of 5G adoption which will bring vehicle-to-vehicle, vehicle-to-infrastructure, and vehicle-to-pedestrian communications into being. Higher speed, longer range, and more reliable vehicle connections will set the stage for advanced cybersecurity and software update solutions as well as improved vehicle location information and collision avoidance.

Not even COVID-19 can stand in the way of the movement to connect cars. For the foreseeable future, the pandemic will continue to wreak havoc with autonomous, electrification, and sharing. Car connections will survive even this apocalypse.


The Great 2020 Mobility Rethink
by Roger C. Lanctot on 04-03-2020 at 10:00 am


General Motors’ announced shutdown of the Maven car sharing service was hardly a surprise, especially coming in the midst of the COVID-19 pandemic. By the time of its demise, Maven was operating in only four or five cities, down from a high of 17. The venture was doomed from the start, and its departure is a cautionary tale for Toyota, Renault, BMW, Daimler and other auto makers flirting with car sharing.

General Motors told Automotive News that it learned a lot from Maven and its sister division Maven Gig. The group did collect vehicle data to track charging and usage of Bolt EVs on the Maven Gig platform, and GM got its first real taste of running a connected fleet of vehicles and directly managing a B2C service.

The reason that Maven was doomed from day one is the fact that it was born an orphan. Maven was never part of existing GM operations. Though run by executives drawn from GM’s fleet division it was not part of GM’s fleet operation. Though leveraging OnStar’s connectivity and call center, Maven relied on its own add-on hardware for vehicle access and customer service was separate from OnStar.

Stunning to me from day one was the introduction of the Maven brand itself. The name was vaguely clever, but GM’s marketing executives must have realized that it would take millions of dollars to build Maven into a recognizable corollary brand under the General Motors umbrella.

By opting for the Maven brand and standing up a separate team outside of any existing division and without sufficient marketing support or C-level support within the larger organization, Maven was left to subsist on its own with no particular direction. What was Maven’s purpose in life? Was it intended to attract new customers to the brand? Was it intended to open new markets? Was it a platform to trial new technologies and business models? Was it intended to fend off competitive threats? Was it intended to supply cars to Uber/Lyft/Via drivers? Was it intended to provide loaner cars for dealers – potentially displacing existing Uber/Lyft or rental car providers?

None of this was clearly defined. Making matters more confusing was the failure to integrate the Maven strategy with the $500M investment in Lyft – that is now looking more ill-conceived than ever.

A Lyft-Maven one-two punch might have been a powerful launch statement for the car sharing offer. GM could have announced its $500M Lyft investment along with the Maven launch with incentives for drivers to use Maven vehicles at attractive rates. Needless to say that didn’t happen.

Neither the Maven nor the Lyft brand was leveraged on the OnStar platform or as part of GM’s Global Connected Consumer group, within which OnStar resides. There might have been a scenario where every GM vehicle was enabled for Lyft or Maven activation. In fact, the OnStar brand – in which GM has invested hundreds of millions of dollars over more than two decades – could have been repositioned as a mobility brand.

Such a strategy would have superseded Elon Musk’s plans to launch Tesla Network automated shared vehicles via over-the-air software update. OnStar could have begun its evolution and return to being a brand-defining service for General Motors. But, no, that didn’t happen either.

Without a plan to leverage Maven through GM’s fleet operations, its dealer operations, or as a brand builder and customer finder, Maven meandered into irrelevance and found its meaning as a cost center – a science experiment. Maven couldn’t absorb large volumes of off-lease vehicles; it couldn’t pioneer new car models to be shared from splashy city center showrooms; it couldn’t redefine car sharing.

Maven was a careful, cautious, foray into an arena dominated by local community supported car sharing operators, pure plays, and rental car companies. Rental car companies, in particular, kept the pressure on in what is a low margin marketplace.

Car sharing is a capital intensive business requiring millions of dollars to support wheels on the street. Add in the logistical element of cleaning, insuring, repairing, recovering, and paying for parking and parking infractions and there is precious little cash left over.

The final blow for car sharing operators lacking patience is the inability to drive volume. Yandex is one of the few operators in the world that has found the magic wand to drive customer interest in shared cars – super cheap rates. For GM, such a strategy would only stimulate the bleeding. It would also undermine the company’s primary business: selling cars.

The challenge for car companies is to aggressively promote car sharing when what they really want to do is sell cars. Car companies spend billions of dollars on advertising and incentives to sell cars. Car sharing is a hobby and it is a hobby that, should it become too successful, could threaten the primary business.

In the end, it all comes back to Tesla. Tesla has already solved this problem. Tesla Network has car sharing and ride hailing baked into its long-term plans as an inherent function built into the vehicle. Car sharing and ride hailing, when available directly from Tesla, won’t require additional hardware, a new organization or a new brand – it will simply be an extension of the Tesla value proposition.

This was the opportunity presented by Maven – to be an inherent part of all GM vehicles and an integrated part of the GM value proposition – perhaps under the OnStar brand. This is the core of the missed opportunity.

The shuttering of Maven is an ominous note to be struck within the wider GM family. GM is burning $250M/quarter at its Cruise Automation self-driving car unit in San Francisco.

GM has its own, in-house automated driving initiative in Super/Ultra Cruise. Super Cruise and Ultra Cruise Level 2 automated driving features are already being deployed on GM vehicles – with 22 models expected to be outfitted with the technology by the end of 2021. Super and Ultra Cruise are being developed with unique mapping and localization technology unlike anything Cruise Automation is working on.

GM has clearly indicated that the Cruise Automation path to market is not an aftermarket kit or fitment on mass produced, consumer-oriented vehicles – it is a full-blown robo-taxi. GM’s in-house team – at the Warren Tech Center – is already delivering a value proposition, Super Cruise, for which consumers are paying…happily.

By comparison, Cruise is looking like an expensive distraction and, like the $500M Lyft investment, a gamble. As a separate operation, like Maven, Cruise may find itself cut loose if not shut down – though Softbank, Honda, and other investors will have a say. With the termination of Maven, GM has shown, once again, it is able to cut its losses. In the time of COVID-19, though, we are starting to get close to the bone.