
Automating the Analysis of Power MOSFET Designs
by Daniel Payne on 06-04-2020 at 10:00 am


There’s a world of difference between our smartphones, which are battery powered and pack billions of transistors, and power MOSFET devices used in industrial, telecom, cloud computing and automotive applications, where they can operate at a few hundred volts and up to 80 A of current. I’ve read about one power MOSFET company called Monolithic Power Systems, Inc. (MPS) because they were in the news recently with an open-source ventilator to help out during the COVID-19 pandemic. Now that’s a noble goal to have.

MPS Open-Source Ventilator

MPS also uses some EDA tools to help automate their IC design process for power MOSFET devices, and back in June 2019 they talked about using Polas from Empyrean. I followed up on WebEx with Kyle Tsai at Empyrean to see what Polas had to offer and to better understand the design challenges.

Designers of power MOSFET devices have a handful of challenges, like:

  • Reaching a low Rds value to meet spec
  • Sufficient metal interconnections
  • Package and pad locations that work
  • Vias and contacts that allow large, peak currents
  • Meeting dead-time specs

Shown below is a schematic with two MOSFET devices: the one on top is called the high-side, and the one on the bottom the low-side. The gates of the two MOSFETs are pulsed at complementary times:

MOSFET timing, schematic
Simplified MOSFET layout
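To make the complementary gating and the dead-time spec from the bullet list concrete, here is a toy Python sketch (all timing values are invented for illustration; this is not how Polas models switching). Both gates are held off briefly around each edge so the high-side and low-side devices never conduct at the same time:

```python
def gate_waveforms(period, duty, dead_time, steps):
    """Complementary high-side/low-side gate signals with dead-time.

    Both gates are held off for dead_time after each switching edge so the
    two MOSFETs never conduct simultaneously (no shoot-through current).
    """
    high, low = [], []
    for i in range(steps):
        t = (i / steps) * period                      # time within one period
        high.append(int(dead_time <= t < duty * period))
        low.append(int(duty * period + dead_time <= t < period))
    return high, low

# 1 us switching period, 50% duty, 50 ns dead-time, 100 samples
hs, ls = gate_waveforms(1e-6, 0.5, 50e-9, 100)
assert all(a + b <= 1 for a, b in zip(hs, ls))        # never both on at once
```

Violating the dead-time spec means both devices briefly conduct at once, shorting the supply through the half-bridge, which is why it appears in the challenge list above.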

The four types of analysis that Polas offers for power MOSFET devices are:

  1. Rdson
  2. EM/IR drop
  3. Timing Delay
  4. Cross-talk

IC layout creates parasitic RC values, and these must be accounted for during each analysis because they impact how the device performs against all of the specifications. Polas uses some clever technology to account for IC layout and parasitics:

  • A field solver to account for irregular polygons
  • Split MOS for the most accurate interconnect resistance
  • Fast-mode Rds(on) and IR drop analysis without dynamic circuit simulation
  • SPICE-based simulation mode for Rds(on) and IR drop analysis

In just a few minutes you can get a very detailed report of Rdson effective resistance with Polas:

Rdson Effective Resistance
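As a back-of-the-envelope illustration of what such a report aggregates (the numbers below are invented, and Polas’s field-solver extraction is far more detailed), effective Rds(on) can be thought of as thousands of identical fingers in parallel plus the series resistance of the metal routing:

```python
def rdson_effective(r_finger, n_fingers, r_metal):
    """Toy effective Rds(on): n identical fingers in parallel, plus the
    series resistance of the metal interconnect feeding the array."""
    return r_finger / n_fingers + r_metal

# e.g. 12,000 fingers of 60 ohms each behind 0.5 mOhm of metal
r_eff = rdson_effective(r_finger=60.0, n_fingers=12_000, r_metal=0.0005)
print(f"{r_eff * 1000:.2f} mOhm")  # prints "5.50 mOhm"
```

The real analysis must handle non-uniform current spreading across irregular metal polygons, which is exactly why the field solver and split-MOS modeling listed above matter.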

EM/IR drop analysis uses a color gradient (red – large drop, blue – small drop) to show the weak and strong spots of your layout, so the design engineer can communicate to the layout designer on which areas need to have wider metal or improved vias and contacts:

EM IR drop analysis
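A one-dimensional sketch shows why the far end of a strap turns red in such a plot: modeling the metal as a ladder of series segments with current drawn at each tap, the cumulative drop grows toward the end farthest from the pad (the values here are illustrative only, not from any real design):

```python
def ir_drop_profile(r_segment, i_per_tap, n_taps):
    """IR drop along a metal strap fed from one end.

    The strap is n series segments of r_segment ohms, with i_per_tap amps
    drawn at each tap; each segment carries the sum of all downstream tap
    currents, so the accumulated drop is worst at the far end.
    """
    drops, v = [], 0.0
    for k in range(n_taps):
        v += i_per_tap * (n_taps - k) * r_segment  # current still flowing past tap k
        drops.append(v)
    return drops

profile = ir_drop_profile(r_segment=0.001, i_per_tap=1.0, n_taps=5)
assert profile == sorted(profile)  # drop increases toward the far end
```

Widening the metal (smaller r_segment) or adding a second feed point is what flattens this profile, which is the layout feedback the color map is meant to drive.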

Timing delay analysis uses a circuit simulator called ALPS, and colors show how the delay changes across the IC layout view:

Timing Delay Analysis by Circuit Simulation

Cross-talk is the capacitive coupling between adjacent IC layers, so the analysis shows the engineering team all of the coupled nets and their coupling capacitances, so that better layout decisions can be made to minimize the effect. Here you can see the cross-probing between the coupling list reported on the left and the layout on the right:

Cross-probe of Coupling Capacitance

Setting up and using Polas is straightforward: you’ll just need the PDK files from your foundry and the IC layout, so it fits nicely into your design flow. Some other unique features of Polas are support for:

  • Tandem MOS devices
  • Different widths and types in the layout
  • Pre-drivers for power MOS
Supported Features

Customer Case 1

A Polas user doing charger power MOS designs in a BCD technology needed to do pad location analysis and optimization. With this tool they were able to calculate the Rdson values and current density. Next, they optimized the pad number and locations, meeting cost and performance goals. Here’s a table showing their pad number, Rdson value, top metal resistance and the maximum current densities:

Customer Case 2

In this scenario engineers were designing a vehicle power MOS on a BCD process, and they ran timing analysis to help optimize their layout to meet timing specifications. The good news is that it took only one pass of silicon to get their results. Notice that this layout uses 12,000 fingers.

Vehicle – Power MOS, BCD Process

Summary

Industrial, telecom, cloud computing and automotive designs are all using power MOSFET chips, and with their unique environments come challenging design requirements. It’s now possible to use EDA tools like Polas from Empyrean to quickly and accurately analyze Rdson, EM/IR drop, timing delays and cross-talk effects.



Webinar Replay – Insight into Creating a Common Testbench
by Tom Simon on 06-04-2020 at 6:00 am


These days the verification process starts right when the design process begins, and it keeps going well past the end of the design phase. Simulation is used extensively at every stage of design and can go a long way to help validate a design. However, for many types of designs, especially those that process complex data streams, emulation has to be used to ensure proper operation. In a recent webinar Aldec not only discusses the limitations of simulation-only verification for ASICs and large FPGAs, but also shows how it is possible to create common testbenches that apply to emulation as well, to improve efficiency and help in problem diagnosis.

In the webinar titled Common Testbench Development for Simulation and Prototyping, Alexander Gnusin goes into great detail about the reasons for using a common testbench between simulation and emulation, and then he dives into the specifics of how to make it happen.

Given the relatively slow speed of simulation, emulation is the only way to run enough cycles to ensure that large designs operate properly with large frame sizes. Similarly, simulation time increases as more of the system is included in the scope. Real hardware environments also differ from simulations, so it is important to factor this in as well. Lastly, as reliable as synthesis and STA are, it is imperative to simulate at the gate level to ensure the hardware implementation is correct.

Yet, there are a number of issues that must be dealt with to enable emulation, starting with fundamental ones such as the cycle-based I/O in hardware designs. Cycle-based I/O stimulus can overload emulator interfaces. Alexander discusses eliminating this issue by adding extra code in the hardware domain to remove the need for cycle-based communication. He also mentions lowering the overall transaction frequency, and talks about adding synthesizable verification components such as drivers, monitors, responders and checkers.

The webinar takes time to discuss how common testbenches should be set up. Alexander goes through the coding steps using Aldec’s HES DVM, which is a hybrid verification platform. Aldec uses SCE-MI for function-based transaction-level co-emulation. The common testbench has two top levels – an HDL top and an HVL top. The HVL top uses SystemVerilog and optionally UVM. The HDL top uses SystemVerilog RTL with extra SCE-MI2 compiler features.

There are some helpful coding enhancements for emulation available through the SCE-MI2 compiler features. Among these are implicit state machines, shared registers, clock and reset generation templates, hierarchical read access, writing and reading of memory arrays, and file I/O. The webinar provides examples of each of these to help better understand what is offered.

Alexander covers some useful techniques that can improve common testbench effectiveness. LFSR-based, seed-programmable randomization can reduce connection load and provide useful stimulus. He also suggests using synthesizable FIFO-based scoreboards for datapath checking. To verify the equivalence of packets or data chunks, Alexander suggests compressing them to a short signature using CRC or FCS methods and performing the datapath checking just on those signatures. End-of-test statistical verification can be based on comparing configurable counter values in the design and verification components.
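The two ideas in that paragraph can be sketched in a few lines of Python (these are generic textbook constructions with arbitrary polynomial choices, not Aldec’s actual implementation): a seed-programmable LFSR gives cheap, repeatable stimulus, and a CRC compresses a packet to a short signature so only signatures need to be compared:

```python
def lfsr16(seed, taps=(15, 13, 12, 10), n=8):
    """Seed-programmable 16-bit Fibonacci LFSR: cheap, repeatable stimulus."""
    state = seed & 0xFFFF
    out = []
    for _ in range(n):
        out.append(state)
        bit = 0
        for t in taps:                      # XOR the tap bits for feedback
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & 0xFFFF
    return out

def crc8(data, poly=0x07):
    """Bitwise CRC-8: compress a packet to a one-byte signature."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# same seed -> same stimulus; matching signatures imply matching datapaths
stim = lfsr16(seed=0xACE1)
assert lfsr16(seed=0xACE1) == stim
assert crc8(b"packet") == crc8(b"packet")
assert crc8(b"packet") != crc8(b"pocket")
```

Both constructions are trivially synthesizable, which is why they suit the hardware side of a common testbench.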

Aldec’s HES DVM allows for optimization to improve speed through several methods that Alexander discusses in the webinar. Once the common testbench has been assembled, Aldec’s Riviera-PRO profiler can provide estimates of the maximum emulation speedup.

There is not space here to go through the whole process, but the remainder of the webinar lays out the common testbench development process and then goes through a design example. This detailed and informative webinar is available for replay on the Aldec website.


Tesla Driving on the Edge
by Roger C. Lanctot on 06-03-2020 at 10:00 am


It’s happened again. Yesterday, on a highway in Taiwan, a Tesla Model 3 plowed into the trailer of an overturned truck. Accounts of the event suggest the vehicle’s Autopilot system was engaged. The failure of the system to perceive the danger ahead and avoid the inevitable collision suggests an “edge case” scenario – wherein the Autopilot system encounters a circumstance that has not previously been encountered or anticipated in its algorithms.

The video is HERE.

The expression “edge case,” like collateral damage, is a euphemistic way of describing a potentially life-threatening driving situation. Usually, edge cases are unfamiliar objects in the roadway or familiar events that are difficult for on-board computers to interpret due to environmental interference such as lighting, weather, or occlusion from other vehicles.

The crash in Taiwan is reminiscent of multiple previous Tesla Autopilot-engaged crashes that have included driving under trailers and driving into emergency responder vehicles parked in travel lanes. The crash in Taiwan is notable for several reasons including:

  • The Tesla Model 3 completely fails to perceive that the lane in which it is driving is completely blocked by the overturned truck in its path.
  • The Tesla Model 3 completely fails to perceive the truck driver (visible in the video) who has walked forward against the direction of traffic in order to warn oncoming cars away from his overturned vehicle. The truck driver is clearly shown dodging the oncoming Tesla.
  • It is possible that the shrubbery mounted atop the highway barrier on the left side of the lane has interfered with the Model 3’s radar – causing the perception failure.
  • Bright sunlight, coming from the driver’s left, does not appear to be a factor – or should not have been a factor.

Who should be worried?

  • Tesla owners with Autopilot activated should be concerned. It is not at all clear that this is an easily corrected weakness in Autopilot.
  • Regulators should be concerned, as this may or may not be a flaw in Autopilot’s ability to protect drivers. It may require action.
  • General Motors, Toyota, Audi, Nissan, et al. should be concerned. Multiple competing car makers are developing or have already launched their own Autopilot equivalents – such as GM’s Super Cruise – offering “supervised” automated driving. As these companies ponder stepping up to unsupervised Level 4 operation, they will do well to take into account the performance of Tesla Autopilot in this circumstance.

What can be done?

This latest Tesla crash highlights the importance of vehicle-to-cloud/infrastructure connectivity. The existence of traffic camera video capturing the event, thereby enabling forensic assessment, implies that traffic monitoring authorities could use existing video to warn drivers of dangerous traffic conditions in real time.

Companies such as Savari Network, Haas Alert, Notraffic, and TrafficLand are all working on leveraging wireless connections, traffic cameras and other resources to alert drivers to danger ahead. It is not clear from the video how much time transpired from the truck crash to the Tesla crash, but there was enough time for the truck driver to exit his vehicle and walk down the road to warn oncoming drivers. That ought to have allowed enough time for traffic monitoring systems or personnel to send either an automated or manual alert to all drivers headed toward the event.

Those alerts or warnings could have been delivered via telematics service providers, smartphone apps, or embedded vehicle systems with in-dash navigation. The bottom line: existing cellular and camera-based highway monitoring systems could have been used to help avoid the unfortunate demise of yet another Tesla operating on Autopilot. The technology exists to solve these kinds of problems today.


Free Webinar on Verifying On-Chip ESD Protection
by Tom Simon on 06-03-2020 at 6:00 am


Walking across a carpet can generate up to 35,000 volts of static charge, tens of thousands of times higher than the operating voltages of most integrated circuits. When charge built up from static electricity discharges into the pins of an IC, the electrostatic discharge (ESD) protection network on the chip is intended to harmlessly shunt the current to ground. Decades of design experience have taught us how to design these protections. However, if the ESD protections are not properly implemented on chip, or if there is a design flaw, such as incorrect design parameters, a chip can fail in the field. ESD-related failures can be instantaneous when they are caused by things like device burnout, or they can be slow when they are caused, for instance, by electromigration (EM).

An ESD failure in the field can be expensive and can even lead to safety issues, depending on the end product application. The best way to ensure that ESD protections, as implemented, are going to prove effective is to verify them in layout prior to tape out. Waiting until testing can waste time and money, and lead to difficulties in identifying the root cause of the problem. Fortunately, there is a way for IC designers to verify the ESD protections on a chip once the layout is available.

Magwel offers its ESD protection network verification tool ESDi to rapidly detect a wide range of design and implementation issues that can lead to ESD failures. ESDi can check for unprotected pins, missing vias, missing or undersized ESD devices, high bus resistance and more. It also uses parallel processing to quickly simulate HBM events on all or a subset of the pin pairs on a chip. The simulation uses TLP models and can accurately model snap back behavior. It can also predict competitive triggering of multiple ESD devices in a single ESD event. ESDi will report current density and EM violations.
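For context on what an HBM event looks like, here is a first-order sketch using the standard JEDEC human body model network of a 100 pF capacitor discharging through 1.5 kOhm (ESDi’s TLP-based simulation, including snap-back behavior, is far more sophisticated than this single exponential):

```python
import math

def hbm_current(v_charge, t, c=100e-12, r=1500.0):
    """First-order HBM discharge: a 100 pF capacitor charged to v_charge
    volts discharging through 1.5 kOhm gives i(t) = (V/R) * exp(-t/RC)."""
    return (v_charge / r) * math.exp(-t / (r * c))

# a 2 kV HBM zap: ~1.33 A peak, decaying with a 150 ns time constant
i_peak = hbm_current(2000.0, 0.0)
i_tau = hbm_current(2000.0, 150e-9)   # one RC later, ~37% of the peak
```

Even this toy model shows why on-chip shunts matter: over an amp of current has to be steered away from thin gate oxides within nanoseconds.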

Error and violation review is made easy with an advanced user interface for filtering, sorting and selecting errors to view in detail and visualize their locations. False or missed errors are dramatically reduced by using simulation for all tests, instead of having the user pick and choose what potential issues need simulation.

To give a firsthand look at how ESDi is set up and used, Magwel is offering a free webinar on Tuesday June 9th at 10AM Pacific Time. In this webinar Magwel Application Engineer Allan Laser will provide an overview of the tool’s features followed by running a sample design so viewers can better understand each step of operation.

ESDi can be used at the block or chip level. It has its own simulator specifically optimized for ESD event modeling and also has a high accuracy solver-based extractor for use on the design layout. ESDi also has automated and simplified the process of ESD device identification.

ESDi is remarkably effective because its development was based on customer input. The webinar will provide a glimpse into the many large and small features in the tool that make it the choice for many leading semiconductor companies. Registration for Magwel’s ESDi webinar is available here.

About Magwel
Magwel® offers 3D field solver and simulation-based analysis and design solutions for digital, analog/mixed-signal, power management, automotive, and RF semiconductors. Magwel software products address power device design with Rdson extraction and electromigration analysis, ESD protection network simulation/analysis, latch-up analysis, and power distribution network integrity with EM/IR and thermal analysis. Leading semiconductor vendors use Magwel’s tools to improve productivity and avoid redesigns, respins and field failures. Magwel is privately held and headquartered in Leuven, Belgium. Further information on Magwel can be found at www.magwel.com.


Feature-Selective Etching in SAQP for Sub-20 nm Patterning
by Fred Chen on 06-02-2020 at 10:00 am


Self-aligned quadruple patterning (SAQP) is the most widely available technology used for patterning feature pitches less than 38 nm, with a projected capability to reach 19 nm pitch. It is actually an integration of multiple process steps, already being used to pattern the fins of FinFETs [1] and 1X DRAM [2]. These steps, shown schematically in Figure 1, allow lines originally drawn 80 nm apart to generate lines which are ultimately 20 nm apart (effectively 10 nm resolution). This is important, as it is well beyond the resolution of any high volume lithography tool, including EUV (13 nm resolution) [3].

Figure 1. SAQP process flow.
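The pitch arithmetic behind Figure 1 is simple: each self-aligned spacer step halves the line pitch, so the two steps of SAQP quarter it. A trivial sketch makes the 80 nm to 20 nm reduction from the text explicit:

```python
def spacer_pitch(drawn_pitch_nm, spacer_steps):
    """Each self-aligned spacer step halves the line pitch:
    SADP is one step, SAQP is two."""
    return drawn_pitch_nm / 2 ** spacer_steps

assert spacer_pitch(80, 1) == 40      # SADP: 80 nm drawn -> 40 nm final
assert spacer_pitch(80, 2) == 20      # SAQP: 80 nm drawn -> 20 nm final
assert spacer_pitch(80, 2) / 2 == 10  # 10 nm half-pitch, the effective resolution
```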

Feature grouping
The process naturally categorizes features into three groups: core, shell, boundary (Figure 2)[4]. The shell features naturally form loops which need to be cut. Likewise, the boundary constitutes a mesh which also needs to be separated into segments. Consequently, the SAQP process must conclude with lithography steps which cut or trim previously defined shell and boundary features. By comparison, the older SADP process only has two groups, core and boundary [5].

Figure 2. Separation of SAQP-generated features into core (C), shell (S) and boundary (B) categories. The green indicates the second spacer. The core and boundary features are expected to be made of the same material, while the shell feature is made of a different material.

An alternative SAQP process
Under an alternative SAQP process flow (Figure 3), the shell feature is actually the remnant first spacer material, while the core and boundary are a different material, either the substrate or a gapfill material. Hence, they are indicated by different colors in Figure 2. The fact that they are different materials suggests that they can be selectively etched. This enables some opportunities for hard-to-do patterning [6].

Figure 3. Alternative SAQP process flow, where instead of gaps, different materials fill different regions.

Allowing a prohibited combination
A particularly handy application is the combination of minimum pitch and 2x minimum pitch features. Such a combination is generally forbidden in a single exposure with k1<0.5 [7]. A particularly prohibitive combination would be lines at minimum pitch, with breaks at 2X minimum pitch (Figure 4, left). The diffraction pattern of the line breaks is a much weaker signal than that of the lines themselves, since they occupy a much smaller area. It also degrades faster with defocus [7]. Such a combination also cannot be fixed with assist features [8], as there is no room to insert them for minimum pitch lines. On the other hand, with selective etching, a mask feature is allowed to cross the intervening line in the middle (Figure 4, right). This greatly simplifies the cutting and avoids the edge placement errors that may arise with separated cuts at the two locations [6].

Figure 4. Left: The line pitch and the line break pitch (=2X line pitch) are not compatible. Right: The incompatible pitches may be combined with the assistance of selective etching. In this case, only the material in the blue areas is etched; the red areas are not affected by this mask.
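The forbidden combination can be quantified with the Rayleigh k1 factor; the sketch below assumes ArF immersion numbers (193 nm wavelength, NA 1.35) and a hypothetical 76 nm minimum pitch purely for illustration:

```python
def k1_factor(pitch_nm, wavelength_nm=193.0, na=1.35):
    """Rayleigh k1 = half-pitch * NA / wavelength for a line/space grating.
    Single-exposure imaging requires k1 >= 0.25, and mixed-pitch
    restrictions apply below k1 = 0.5 (see reference [7])."""
    return (pitch_nm / 2) * na / wavelength_nm

min_pitch = 76.0                      # hypothetical minimum line pitch
k1_lines = k1_factor(min_pitch)       # ~0.27, near the resolution limit
k1_breaks = k1_factor(2 * min_pitch)  # ~0.53, the 2x-pitch line breaks
# with the lines printing below k1 = 0.5, combining them with the
# 2x-pitch break pattern in one exposure is forbidden
```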

More tractable multipatterning
With selective etching, three masks are a must: one is needed to define the separated A/B regions, a second is used with the A-selective etch, and the third with the B-selective etch. However, selective etching (in combination with SAQP) also makes multipatterning more tractable, allowing more overlay tolerance and the minimum number of masks for combining minimum line pitch with line breaks spaced at double the minimum line pitch.

References
[1] https://spie.org/news/6378-self-aligned-quadruple-patterning-to-meet-requirements-for-fins-with-high-density?SSO=1

[2] https://www.techinsights.com/blog/samsung-18-nm-dram-cell-integration-qpt-and-higher-uniformed-capacitor-high-k-dielectrics

[3] https://www.asml.com/en/products/euv-lithography-systems/twinscan-nxe3400c

[4] T. Ihara, T. Hongo, A. Takahashi, C. Kodama, “Grid-based Self-Aligned Quadruple Patterning Aware Two Dimensional Routing Pattern,” 2016 Design, Automation & Test in Europe, p. 241.

[5] K. Nakayama, C. Kodama, T. Kotani, S. Nojima, S. Mimotogi, S. Miyamoto, “Self-Aligned Double and Quadruple Patterning Layout Principle,” Proc. SPIE 8327, 83270V (2012).

[6] A. Raley, N. Mohanty, X. Sun, R. A. Farrell, J. T. Smith, A. Ko, A. W. Metz, P. Biolsi, A. Devilliers, “Self-aligned blocking integration demonstration for critical sub-40nm pitch Mx level patterning,” Proc. SPIE 10149, 101490O (2017).

[7] https://www.linkedin.com/pulse/forbidden-pitch-combination-advanced-lithography-nodes-frederick-chen/

[8] J. G. Garofalo, O. W. Otto, R. A. Cirelli, R. L. Kostelak, S. Vaidya, “Automated layout of mask assist-features for realizing 0.5 k1 ASIC lithography,” Proc. SPIE 2440, 302 (1995).



Arm Reinforces the Mobile Fortress
by Bernard Murphy on 06-02-2020 at 6:00 am


Arm did it again. They continue to press their advantage, most recently with the announcement of their 2020 release of cores for mobile applications: in Cortex-A, in what they now call Cortex-X custom cores, in Mali GPUs and in the next generation of their Ethos neural net core.

Paul Williamson, VP GM of the client line of business, presented this release as necessary to keep up with the digital immersion that is rapidly becoming central to every aspect of our lives, from remote meetings, to education, to telehealth, to staying in touch with family and friends. For most of us, that interaction is through our phones. And of course, machine learning is becoming an integral part of that experience, for all the usual reasons – clever image processing and bio-sensing – but also for better managing the device, in power management and behavioral security.

Cortex-A78 is the latest high-end CPU, offering 20% higher performance than the A77 in the same power envelope for multi-day battery life, and a 15% smaller area footprint than the A77 in an octa-core cluster with the A55.

Paul also announced the Cortex-X program, a way for partners to work with Arm on the customization option they introduced recently. Rather than a “here’s a way to add your custom instructions, good luck” approach, Cortex-X is a collaborative program in which partners work with Arm to develop differentiated and proven solutions with maximum performance as the primary goal. The first core in this class, developed with one or more partners (Paul wouldn’t elaborate), is called the Cortex-X1 and provides 30% higher sustained performance than the previous generation.

In graphics, Paul introduced the Mali-G78, with 25% better performance than the previous generation, support for up to 24 cores, new technology to improve scalability and reduce energy consumption, and a 30% reduction in energy for a key math unit, lowering total power consumption. There’s a big emphasis here on enhancing mobile gaming by rendering complex scenes like smoke, grass and trees. Arm has also put more into machine learning performance on the GPU, showing an average 15% performance increase over a variety of benchmarks.

Interestingly, Arm aren’t only pushing the premium experience in these devices. They have also introduced the Mali-G68 core for phones in the sub-premium tier, supporting all the features of the G78 but scaling up to 6 rather than 24 cores. That should make it a lot easier for OEMs to support common software and a common experience across a range of phones. Sub-premium users can also play Fortnite, just a little slower.

I for one was happy when Arm introduced their first true neural net core in Ethos. It was a natural for them to have an offering in that space, and this also looks like it will be a dynamic player in the line-up. The second-generation core in the 2020 announcement is the Ethos-N78, offering double the MAC capacity, new compression technology to further reduce off-chip memory accesses per inference, and more than 90 different ways to configure the core. Stats show 2X performance at full MAC capacity versus the N77 and a more than 40% improvement in DRAM bandwidth efficiency.

All in all, the king just took a big jump forward in mobile support. Arm was always a good solution provider; competition is making them better.


TSMC Pushes out Equip Purchases – SIA and SEMI ask for Government Help
by Robert Maire on 06-01-2020 at 9:00 am


TSMC pushing out equipment purchases
Covid/China trickles down to chip industry
SIA and SEMI ask for financial/govt help to keep up
The beginning of another down cycle?

We have heard from a number of sources that TSMC has started to push out equipment orders as concerns grow about the second half of the year.

Right now is the most logical time for TSMC to hedge their bets, as they have rolled out all the needed/important purchases to get 5nm finished and get manufacturing up to speed to support their number one customer, Apple, in its launch of the iPhone 12 in the fall.

As we have pointed out many times, in a yearly repeating pattern, TSMC starts ordering equipment in Q4 and installs in Q1/Q2 to be ready for the Q3 Apple push. Any equipment ordered or due to be shipped now won’t help with the iPhone launch, so it’s a good time to put the brakes on.

Covid & China trickle down to production
We have been saying for several months now that the second half of the year would get ugly as the double impact of Covid and China trade issues (Huawei) trickle down to demand and production of chips. The work at home economy won’t offset 25% unemployment and Huawei getting cut off. An initial surge of demand related to servers and laptops needed for work at home is more of a one time event rather than a sustainable, permanent increase.

Almost every equipment company we listened to on recent earnings calls was very cautious and unwilling to talk about the second half as everyone seems to agree, in an unspoken manner, that things will get worse.

Perhaps a bigger question is whether the memory industry will also slow down along with the foundry industry. There is obviously more sustainable demand for memory for servers and cloud-based applications, but will consumer slowing offset that?

Given that the memory industry is always a delicate game of supply/demand balance we think the current global events will likely drop demand below a critical level needed to support current pricing. Perhaps not across all memory types but enough to weaken pricing and thus slow spending again. We are not that far out of a sharp memory downturn which followed an unusually strong memory up cycle. We could be returning to more “normal” memory cyclicality.

WSJ reports that US chip makers are asking for help
The Wall Street Journal, which has been paying a lot more attention to the chip industry, reported over the weekend that the US chip industry is asking for help in the face of foreign competition: “Semiconductor Industry to Lobby for Billions to Boost U.S. Manufacturing.”

The Semiconductor Industry Association (SIA) is looking for $37B in help to support the US industry. SEMI, the trade group for equipment and materials suppliers, is lobbying for tax credits rather than a direct cash handout.

This is of course coupled with Intel’s CEO, Bob Swan, directly lobbying Washington with plans for a US fab/foundry in competition with what TSMC is doing.

We hope that these proposals are taken seriously and acted on rather than ignored or watered down. As we have pointed out before, it is clear that semiconductor chips are more important than selling more soybeans (even though farmers may be in election battleground states and chip makers on the “left coast”).

China is pouring tens of billions of dollars, as quickly as possible into the crucial semiconductor industry and will obviously re-double efforts given the current debacle. Meanwhile, the US semiconductor industry has been exporting both technology and production.

These aid efforts aren’t likely to happen any time soon and may not happen until it’s too late, but it’s certainly a good start, much like the US’s efforts with TSMC.

We think the odds of an investment tax credit are pretty good. Direct cash infusions are likely more difficult to get, but the $10B or so needed for a bleeding edge fab is small change and may fit in existing defense budgets.

The Stocks

We are likely to see a fall-off, but perhaps not as much in the near term, as there has been so much positive momentum in the stocks and investors may not believe the weakness. We would expect the stocks to more fully capitulate in July, when second quarter numbers are announced and the effects of the slowdown show up in black and white.

Although most in the industry are not giving official guidance, they have been doing everything but. They will have to more openly talk about the slowing orders.

It’s clear that while TSMC may be the first to slow down, they will not be the last, as these things are always industry wide; it’s just that some companies are smarter and react faster.

We also think that sooner or later the full impact of the Huawei issue will be felt as alleged “loopholes” get closed and the impact grows.

We think Applied has a lot of exposure to TSMC, as does KLAC. ASML may be less impacted, as most of the 5nm-related EUV tools have already been shipped. Lam is usually the memory “poster child” and may also have less near-term impact if Samsung continues to spend.

Semiconductor Advisors

Semiconductor Advisors on SemiWiki


Tortuga Logic CEO Update 2020
by Daniel Nenni on 06-01-2020 at 6:00 am


We started working with Tortuga Logic two years ago, beginning with a CEO interview, so it is time for an update. The venerable Dr. Bernard Murphy did the first interview with Jason, which is absolutely worth reading again.

Security is also one of the vertical markets we track, and it has been trending up for the last two years. Looking at the analytics, Tortuga Logic has had a great couple of years as well, but first let’s start with Jason’s official biography from the Tortuga website:

“Dr. Jason Oberg is Chief Executive Officer and co-founder of Tortuga Logic, where he is responsible for overseeing the company’s technology and  strategic positioning. Dr. Oberg works closely with the Tortuga Logic team to facilitate capital, partnerships and revenue on all products and services. As a leading expert in hardware security, Dr. Oberg brings years of intellectual property and unique technologies to the company. His work has been cited over 700 times and he holds six issued and pending patents. He received his B.S. in Computer Engineering from UC Santa Barbara and an M.S. and Ph.D. in Computer Science from UC San Diego.”

Where did the company name come from?
The company was formed out of decades of hardware security research at UCSD and UCSB, and we wanted to incorporate something aquatic (because both universities are on the ocean) with something that represents protection and security. Tortuga (Spanish for turtle) was the conclusion, because turtles live in the ocean and have a secure shell (you can see this on our logo). We of course chose Logic because we work closely with hardware. Hence Tortuga Logic was born.

Tortuga Logic is at a unique intersection of cybersecurity and hardware design. What security weaknesses is your company addressing?
Tortuga Logic is focused on identifying digital security issues in modern ASICs, SoCs, and FPGAs, whether they are weaknesses in the logical design itself or in the firmware executing on the system. In general, the types of weaknesses we cover make up the majority (80%) of the existing hardware Common Weakness Enumeration (CWE) list maintained by MITRE.

What markets have the most at stake from a hardware security vulnerability?
Security is all about risk reduction, so the markets that have the most at stake financially are the ones that are the most sensitive to preventing hardware security vulnerabilities. From a semiconductor market perspective, a hardware vulnerability influences the security of the entire end system, so you must think vertically about the impact of hardware vulnerabilities.

That said, we see IIoT, Automotive, and Datacenters as being among the markets at the highest risk from a hardware vulnerability. These markets have felt the pain of recent hardware vulnerabilities in Bluetooth Low Energy IoT devices, Microarchitectural side channels in large application processors, and decentralized platform security in the datacenter to name a few. Aerospace/Defense is also a very important and sensitive market to hardware vulnerabilities, with the lowest tolerance for risk. Much of our technology has been DoD funded so there is a keen interest there.

What is driving the increase in hardware security vulnerabilities?
We really see three key factors contributing to this: 1) modern SoCs are becoming increasingly complex hardware/software systems; 2) there has been a surge of awareness around the ability to break into entire systems by finding hardware vulnerabilities; 3) Root of Trust initiatives, while extremely important and fundamental to building a secure system, are filled with mistakes, primarily due to item (1).

Interestingly enough, as more focus is put into building security features deeper into hardware, the more attackers are focused on breaking them. They know if they can break the hardware barrier, they can then break into the system. Unfortunately, this is getting easier to accomplish given semiconductor devices are becoming so complex in both gate count and firmware.

How does one place value on a security product? Isn't it like insurance?
Insurance really isn’t the right word, because security companies are not paying out claims after a vulnerability is found. That said, it is about financial risk reduction and being able to effectively measure the investments made against the reduced risk. Doing nothing puts you at the highest risk. If the cost of a vulnerability is extremely low, then doing nothing is probably fine because the financial risk is very low. However, the vast majority of markets the semiconductor industry serves do carry very high risk, and thus investments in security show a measurable reduction in that risk.

Are there industry initiatives driving hardware security and how do you see them playing out over the next couple of years?
There are some very important initiatives that have recently started, and I highlighted one of them at the beginning of the interview. Specifically, MITRE in late February announced a taxonomy of common hardware weaknesses. The Common Weakness Enumerations (CWEs) have been used extensively by the software community to effectively classify the most impactful software weaknesses.

This new release, version 4.0, driven initially by Intel and MITRE with contributions from our security team at Tortuga Logic, effectively captures the most impactful hardware weaknesses. This is an important initiative because it lets the industry state more transparently which hardware weaknesses have the highest impact and what mitigations are suggested, helping everyone build more secure systems and providing more transparent techniques for measuring effectiveness.

About Tortuga Logic
Founded in 2014, Tortuga Logic is a cybersecurity company that provides industry-leading solutions to address security vulnerabilities overlooked in today’s systems. Tortuga Logic’s innovative hardware security verification solutions, Radix™, enable System-on-Chip (SoC) and FPGA design and security teams to detect and prevent system-wide exploits that are otherwise undetectable using current methods of security review. To learn more, visit www.tortugalogic.com or contact info@tortugalogic.com.

Also Read:

CEO Interview: Robert Blake of Achronix

Flex Logix CEO Update 2020

CEO Interview: Jason Xing of Empyrean Software


Misunderstanding the Economic Factors of Cybercrime

Misunderstanding the Economic Factors of Cybercrime
by Matthew Rosenquist on 05-31-2020 at 6:00 am

Misunderstanding the Economic Factors of Cybercrime

A new study by Cambridge Cybercrime Centre titled Cybercrime is (often) boring: maintaining the infrastructure of cybercrime economies concludes that cybercrime is boring and recommends authorities change their strategy to highlight the tedium in order to dissuade the growth of cybercrime.

Warning: Full-blown rant ahead, as I am frustrated with reports such as this!

Narrowly focused research, which does not look at the big picture as it evolves, leads readers to oversimplified conclusions that are not grounded in reality.

Do these researchers really think that cybercrime is driven by motivations about it being sexy, a fun work environment, or exciting? This report suggests that if we market cybercrime roles as being tedious, then people will not go down that path. Ha!

Wake up! The vast majority of cybercrime is motivated by personal financial gain. Period. Additionally, the massive number of new entrants into digital crime won’t care about tedium or the opinions of people who live a lifestyle where convenience plays a significant role in how they put food on the table.

Throughout history, organized crime has aligned to a pyramid model in which the greatest number of participants are at the bottom, doing grunt jobs. They are poorly compensated, take on more risk, are terribly treated, and generally suffer in their daily grind. Most don’t aspire to be there; rather, they do it because there are no better options.

This report misses the bigger picture!
Consider that one million people join the Internet every day. The majority of the next billion who come online will be from economically struggling regions where people hustle to scratch out a living every day. Unemployment is high and there are almost no opportunities to make money. Half the world makes less than $10 a day, and over 10% live on less than $2 a day. Even a basic job as a mule, social engineer, CAPTCHA reader, ransomware distributor, phishing scammer, etc. will make many of these people more money than they could earn otherwise. The people in warehouses supporting click-farming, earning pennies, aren’t there because they want to be. They simply don’t have many options to earn a wage. They do what is necessary to subsist. Much of the next billion people joining the Internet will see connectivity as a doorway to more opportunities to stay afloat.

Unfortunately, cybercrime will see an explosion over the next few years as people with the greatest needs see the Internet as an opportunity to sustain their families. Some estimates put the overall impact as high as $6 trillion. Cybercrime-as-a-Service is positioned for tremendous growth, as it allows people to join the support base of online criminal groups without any requirement for hacking skills. The pay is low and the work is grinding, but the rewards may far exceed what is available to them otherwise. It does not matter if law enforcement communicates that such roles are boring for the majority of those joining the bottom ranks.

Discussions about tedium from people in economically wealthy countries are irrelevant and myopic when evaluated at the greater scale. For many millions of people, cybercrime will be an avenue for subsistence. For these people, the economics of survival and the scarcity of alternative opportunities will drive decisions. This is the realistic risk we must address.

Image by Colin Behrens from Pixabay


Time for Chip Diplomacy

Time for Chip Diplomacy
by Terry Daly on 05-29-2020 at 10:00 am

image 4

An industry caught in the crosshairs of geopolitics needs global emeritus leadership

The semiconductor industry is at the epicenter of great power politics. An ascendant China is on a quest for a unified global system with China as the leading power. The United States seeks to maintain its position as leader of the liberal democratic order and arbiter of the global economy. The flash points span trade, human rights, national security, and digital technology leadership. Can chip firms protect decades of investment and navigate access to China’s lucrative market under increasing US constraints?

The semiconductor industry and China are deeply integrated. As China captured roughly 50% production share of global electronics, chips became its leading import. China became “coupled” through its position in the global supply chain. Chip companies beat a path to China’s door for access to its fast-growing indigenous market and multi-billion-dollar subsidies. Intel, ARM and AMD formed joint ventures. Samsung, SK Hynix, TSMC, UMC, GF and Intel built 300mm fabs on the mainland. A vibrant Chinese communications sector (Huawei, ZTE, Xiaomi, Oppo, Vivo) consumed high volumes of chips from Qualcomm, Broadcom, Qorvo, Skyworks, Micron, and others. HiSilicon leveraged IP and design services from the industry and manufactured its chips in Taiwan using TSMC’s leading-edge technology. China established venture funds to invest in global firms and sent legions of engineering students to foreign universities. Chinese firms joined industry associations (SIA, GSA, SEMI, IEEE) to build relationships, acquire IP and influence standards. China was projected by SEMI (pre-COVID) to become the largest market for semiconductor equipment suppliers by 2021, powered by its “Made in China 2025” strategy and push for self-sufficiency in chip production.

But there was a long-standing undercurrent of abusive business practices by China including IP theft, forced technology transfers, “pay-to-play” schemes and disregard of WTO obligations. The US pressed for remediation by leveraging tariffs, Huawei 5G security concerns and CFIUS expansion. Then, tragically, came COVID-19. Calls for “de-coupling” grow in frequency and volume. But modifying supply chains in a capital-intensive industry is not simple. Fabs are rarely “relocated” and replacing capacity is expensive. Moving design centers leads to dismantling high-performance design teams central to product development and customer relationship management. Firms face the loss of billions of dollars in revenue and profit along with significant market share by being blocked from access to China’s demand.

How can a firm best optimize shareholder value in this environment? The default of complying with US policy carries potentially severe economic and shareholder value impact. A recent BCG study highlights a loss of 37% in revenue and 18-points in market share for US firms in a 100% de-coupling scenario. Firms can exit non-strategic investments, migrate asset-lite missions to alternate geographies and modify plans for future capacity (e.g. TSMC in Arizona). They can change country of incorporation through merger with a foreign entity (inversion) or divest assets to a “friendly” foreign company with broader freedom of action to address China’s market. These strategies must survive oversight of the Foreign Investment Risk Review Modernization Act (enhanced CFIUS); each carries economic and political risk.

Global emeritus leaders (CEOs, CTOs, academia, government) and industry associations must play a more effective role in protecting the interests of the semiconductor industry. Advocacy efforts to date have fallen short. A public “chip diplomacy” initiative could marshal respected industry emeriti to leverage their global networks and build a compelling picture of the benefits of an open and vibrant industry model. They could help parties envision the benefits of applying the emerging technologies of AI, IoT, blockchain, 5G and quantum computing to shared societal issues such as hunger, climate, and health. They could provide a technology roadmap enabling US-China commercial collaboration in space and sea exploration. Technology solutions to safeguard critical IP, reduce cyber threats and verify end-application use of commercial technologies could be defined. Revisions to the Wassenaar Arrangement governing export controls could help both countries address concerns. Confidence-building measures and multilateral support (South Korea, Japan, EU) are critical.

Industry associations could sponsor a series of “chip war games” to underscore the adverse outcomes of sequential escalation. One scenario might find China withholding shipment of precious metals, revoking licenses to do business in China, threatening market access to South Korea, Japan and other Southeast Asian countries, and nationalizing the semiconductor assets of foreign companies. It could threaten military occupation of Taiwan and control of its vibrant chip sector. The US might in turn sequentially terminate visas for Chinese students, revoke licenses from Chinese companies doing business in the US and expand tariffs on a range of Chinese imports. It could supplement the denied parties lists and broaden the foreign direct product rule (a.k.a. the Huawei chip ban) into a full-throated technology embargo, including semiconductor equipment, licenses to design tools and IP, and the shipment of chips, denying China access to the lifeblood of its digital economy.

The case for de-escalation is clear. The desired semiconductor industry model is one of open markets, free trade, IP protection, full leverage of geographic competencies, innovation by a multi-cultural workforce, global collaboration in research and standards, all funded by private equity and efficient global capital markets. The US and China need to walk back from the precipice, stand-down the acerbic rhetoric and resolve issues underlying the battle for digital technology supremacy. Perhaps a truce on chips and a “co-opetition” model can pave the way for progress on the other bi-lateral flash points. A tall order, but a preferable outcome to mutually assured destruction of the industry and our nations.

Individual companies must take actions required to protect and optimize shareholder value. But if there were ever a time for global semiconductor emeritus leaders and Industry Associations to cash in on their decades of well-earned global relationships and activate chip diplomacy, it is now. We need both the US and China at the table. The stakes are too high to stand idly by and let the chips fall where they may.

Terry Daly is a retired semiconductor industry executive