
Our US chip foundry comments confirmed by WSJ

by Robert Maire on 05-11-2020 at 10:00 am


-Could GloFo come back?
-TSMC or Intel or both or neither?
-Samsung would be a long shot?
-Perhaps Apple could convince TSMC?

The Wall Street Journal put out an article that detailed what we had indicated in our newsletter 10 days ago, that the US government is looking at getting a US based foundry to protect our interests given our increasing dependence on Asia (read that as TSMC).

We had said, “We would also not be surprised to see some sort of US based foundry effort that TSMC could be part of. Maybe joint with Intel.” (May 1st)

The WSJ said yesterday, “Trump administration officials are in talks with Intel Corp., the largest American chip maker, and with TSMC, to build factories in the U.S., according to correspondence viewed by The Wall Street Journal and people familiar with the discussions.” (May 11th)

The increasing need to do something increases pressure
Covid put a very sharp point on what we and many in the chip industry have been talking about for several years: the US’s increasing dependence on chip production outside of the US. While the security threat has been slowly building as China takes over the South China Sea, steamrolls Hong Kong and increases its influence around the globe, the Covid crisis brought the vulnerability to an immediate head by showing that international trade in chips can be cut off by other risks as well.

The fact that the administration continues to ratchet up pressure on China in semiconductors, and is now pushing talks to get a US foundry, all during the Covid pandemic, shows exactly how urgent, important and serious the administration considers the semiconductor issue. This also suggests that the administration is dead serious about the embargo and cutting off Huawei and military use of US chip technology.

We don’t think the June 28th rule implementation will get delayed and we also think loopholes will get closed.

If US pressure doesn’t convince TSMC perhaps Apple could
We are sure that Apple recognizes the existential threat to its livelihood that losing its chip supply from TSMC represents and likely wants TSMC to make chips in the safety of the US. Maybe the double threat of the US government cutting off equipment and Apple cutting off dollars is moving TSMC to get real about putting a fab in the US.

Although it wouldn’t be pretty or cheap, Apple could move a lot of iPhone and other manufacturing to the US if it really had to. However, it would be much more difficult to move a chip foundry….all the more reason to start now.

What does all this say about Taiwan’s future?
If the US government were not concerned about Taiwan’s future then why does it need to get a bleeding edge foundry located in the US? Maybe the US government sees the writing on the wall….that China will not stop until it gets Taiwan back (by whatever means necessary). If the US has a capable, leading edge foundry then they don’t need a free Taiwan quite as much and won’t be put in a corner if China takes it back.

If we start today, it will likely take five to ten years to get a leading edge fab up and running in the US. Maybe barely running. Look at the GloFo example. They sunk a lot of time and money into Albany only to fail at becoming a leading edge foundry. Intel tried and failed in the foundry business. Not so much for lack of technology but the foundry business is clearly not in Intel’s DNA.
So maybe after ten years of money and work, with Intel’s or TSMC’s help, or both cooperating, we might get a reasonable foundry in the US. Who knows what could happen to Taiwan in 10 years….maybe that’s the point.

Where is Intel in all this?
This could obviously be a big potential win for Intel if (and that’s a very big if) they can get the technology act together again. We think it would be an extremely difficult task for Intel to change stripes enough to be a serious competitor to TSMC.

This suggests that a US foundry effort would likely look like a “copy exact” of a TSMC fab, perhaps with Intel’s cooperation. The WSJ article said that Intel’s CEO Swan sent a message to the Department of Defense on April 28th saying that they are ready to build a foundry for US defense and commercial customers.

Being a foundry and being a CPU IDM are two very, very different animals. As we have said, the DNA is completely different. Making working silicon from someone else’s design is much different and requires not only a different skill set but a much different fab setup.

It’s an opportunity that Intel can’t pass up, as they have missed the boat previously on the foundry business and may have a better shot if the government, customers like Apple, and maybe even TSMC support it. If the US government throws money at it, so much the better.

Samsung would be a long shot
Samsung has been a wannabe in the foundry business behind TSMC, and its fab in Austin is OK but far from great. We don’t see them as a likely contender to be in the mix, but stranger things have happened.

Could GloFo come back from the dead?
GloFo has shut down its advanced R&D and sold off its EUV tools, which is essentially like burning the boat after reaching a desert island. Restarting the Albany fab as a leading edge fab is all but impossible now.

Not only are the tools gone, but the people are gone too. So is the mask shop, packed up and shipped off to Dresden. It’s toast.

Where is Apple, Google, Facebook et al?
We could see the tech giants chip in some money as they need the chips and need a secure supply just as much as the Defense Department. This suggests that a foundry in the US could have a ready, willing and able customer base, more than happy to work with them. These customer demands are just as big a threat to TSMC as getting their chip equipment embargoed.

Don’t forget about packaging and the “back end”
If you move a foundry to the US due to risk, it would be stupid to do it without moving some packaging and testing capability as well. After all, you wouldn’t want to make the wafers in the US only to have to ship them back to Taiwan to be packaged. Luckily, packaging and test are far easier than building a fab, but they are low level, low margin businesses that would be difficult to bring back to the US because of cost. While Intel makes its wafers in the US, they are still packaged overseas.

The Stocks
While all this does not directly impact the stocks in the near term, it does show the seriousness of the potential of an embargo as well as deteriorating relations with China, especially in tech. Things seem to be getting worse, even in the face of stocks going up. In our view, this adds further risk to the business on top of Covid.

Given the recent run up in most of the stocks, this news seems to be an indicator of future problems and is not supportive of higher valuations.



KLA – Keep Looking Ahead because we don’t know the future of China & Covid

by Robert Maire on 05-11-2020 at 6:00 am


-Great quarter & execution with minimal Covid impact
-Wide guide is better than no guide as future is very fuzzy
-Feels like slightly down H2 W/ unknown embargo impact

KLA is virtually unscathed by Covid for now at least
KLA put up a very solid quarter with revenues of $1.424B and non-GAAP EPS of $2.47 versus the Street’s $1.39B and $2.28. The company managed to work through and work around most of the Covid related issues in production and installation. There was some mild weakness in the Orbotech PCB and display business, which was down 14% Q/Q to $160M, as it is more consumer facing than KLA’s traditional, fab facing yield management business, which was quite strong with record backlog and shipments.

The “core” KLA business continues to outgrow the overall semi equipment market (albeit at a lower rate) as EUV and advanced foundry demand continues to drive business very strongly.

“Wide Guide”
Guidance is for revenues between $1.26B and $1.54B with non-GAAP EPS ranging from $1.81 to $2.87. Foundry is expected to be a huge 51% of business, with memory at 39% and logic at 10%. This wide guidance range is certainly better than that of other companies who aren’t even trying to give guidance, and seems to span everything from little to no Covid impact at one end to significant impact at the other.

Demand remains very strong, production is the variable-
Management made it clear that the current guide range is caused almost entirely by production related logistics variability and not end customer demand, which remains very, very solid….for now at least. Management also made it clear that demand going forward was less clear, and there seems to be an assumption of softness in H2, but it’s not at all certain.

No China embargo impact in June Q, but afterwards???
Given that the new China licensing does not take effect until June 28th, the June quarter will have zero negative impact caused by the issue.

In our view, it may be possible to see a slight positive benefit in the June quarter as questionable Chinese customers likely want to get their tools before the June 28th cut off date so they may want tools shipped at all costs, in any condition to beat the deadline. We are sure KLA will oblige whomever it can.

The conditions associated with the new licensing seem at best a great unknown. KLA management is assuming minimal impact as much of its products are produced in Singapore and Israel and thus not “US manufactured”.

We think that loophole will likely be shut down very quickly, maybe even before June 28th, if anyone in the US government has any clue or pays attention. We find it unimaginable that the US government would allow a loophole big enough to drive a semiconductor tool through while still keeping ASML from shipping an EUV scanner to China. If I were ASML I would be screaming.

We think the more likely scenario is some sort of licensing for anything that contains US technology, which almost all of KLA’s Singapore manufacturing meets, as does much of its Israel production minus Orbotech.

Also not counted is the potential impact on the 51% of business that is foundry (read that as TSMC). At the very least the licensing issue could cause confusion related delays, which could push shipments out of the September quarter into the December quarter.

All this suggests that between the China embargo and Covid demand impact we could start to see the first negative effects in the September quarter, which could make September quarter guide wider or lower or both.

The stock
KLA’s execution was flawless as usual despite the Covid confusion. A bit like the proverbial duck…calm on the surface but paddling like crazy under water. The stock is OK for the near term as the performance was a lot better than it could have otherwise been. This “teflon” like performance likely adds to the attraction of KLA’s market position in the near term.

However, we would remain very, very cautious about the longer term as we get closer to the September quarter, which will likely see some China impact as well as early signs of the global economic slowdown, which will undoubtedly trickle down to semiconductor industry demand.

Management was careful about longer term demand and impact prognostication and echoed what we heard elsewhere about likely softness in H2. For now, KLA looks a bit like a fortress in a potentially declining neighborhood facing two plagues of unknown future impact.

We will “Keep looking ahead” to try to determine the impact.


Autonomous Cars Reality is Stranger than Fiction

by Roger C. Lanctot on 05-10-2020 at 10:00 am


For tech-sensitive viewers of streaming content it is becoming increasingly difficult to avoid the appearance of autonomous vehicles in serialized television programs. Amazon’s “Upload” and HBO’s “Westworld” are two such examples.

Described as a comedy (with elements of a thriller) “Upload” makes gratuitous use of autonomous vehicles for both plot elements (SPOILER ALERT: two people are murdered by autonomous cars!) and sight gags (i.e. the lead character controls his autonomous car with a joystick like a videogame). In contrast, “Westworld” treats autonomous vehicles – which emerge in Season 3 – as a matter-of-fact, taken-for-granted element in the landscape – including autonomous SUVs, bikes, and motorcycles that arrive and depart on demand with or without passengers/drivers.

Both autonomous vehicle media manifestations reminded me of a recent demonstration I received at Magna Seating of a reconfigurable cabin system. The Magna Seating system is an app-driven seating platform that allows multiple reconfigurable seating arrangements suitable for a range of applications with, best of all, no need for hands-on seat manipulation.

To be honest, the Magna system had its debut at CES 2019, but I only discovered it recently. The proud creator of stow-and-go seating – which allows seats to “effortlessly” fold up and disappear into the vehicle floor – has outdone itself with this latest concept.

Magna seating video demo and focus group

Having wrestled with primitive third row seats in older Chevrolet Suburbans and having extracted second and third row seats from Plymouth Voyagers and Toyota Siennas, I was blown away by the Magna concept. In fact, the autonomous cars in “Upload” seem to have already deployed the technology, as the characters are portrayed in both leaning-forward and lying-flat scenarios in the same autonomous car.

Magna’s forward looking vision is especially compelling as U.S. consumers flock to larger cars including crossovers and SUVs. Reconfigurable seating looks like the only way to go.

Since I saw the Magna demo, though, we suddenly have new seating priorities – and that includes both public and private transportation. It won’t be enough to be able to move the seats around via mobile app, passengers will also want some physical barriers or protection. This is a challenge tailor-made for the engineers at Magna and there are plenty of solutions already in the market that will provide some clues.

The Driven reports the impending launch in Sydney, Australia, of a fleet of 120 electric taxis offering a “zero contact” transport alternative from ETaxiCo. The fleet will be based on BYD e6 SUVs and will offer a zero-contact “capsule” to create separate areas for the driver, front passenger and both left and right back seat passengers.

SOURCE: Interior of ETaxiCo taxi as pictured in The Driven report.

Airlines can be expected to take steps of their own to protect the flying public. Among multiple adaptations Avio Interiors’ Glasssafe stands out as a simple solution for isolating passengers in a post-COVID-19 world. There are many others – all of which will be expensive but necessary for airlines to deploy.

SOURCE: Avio Interiors Glasssafe

For me, though, Magna’s concept stands out for its re-imagining of what new value propositions lie in larger vehicles with reconfigurable interiors. The autonomous vehicles in “Upload” almost seem to have stolen Magna’s blueprints as the cabins seem to change effortlessly to suit the customer’s needs. In contrast, again, the “Westworld” autonomous vehicles are purely functional with no reconfiguration razzle-dazzle.

Give me the “Upload” Magna-like experience, thank you. No more battling with bulky seats, no more figuring out stow-and-go protocols, nothing but the touch of a finger on a screen. Perhaps most interesting of all, the autonomous vehicles in “Upload” appear to be owned by the users. The reconfigurable seating helps to facilitate the sense of a multipurpose vehicle that can be rearranged to suit different scenarios.

I love it. Now, Magna, I want to see what you have for that post-COVID-19 car buyer. Whattya say?


MOSFET Gate Length Scaling Limit at Reduced Threshold Voltages

by Fred Chen on 05-10-2020 at 6:00 am


As transistor dimensions shrink to follow Moore’s Law, the functionality of the gate used to switch on or off the current is actually being degraded by the short channel effect (SCE) [1-5]. Moreover, the simultaneous reduction of voltage aggravates the degradation, as will be discussed below.

A Practical Lower Limit of Threshold Voltage
First, we will estimate a practical lower limit for the threshold voltage Vth, i.e., the gate voltage at which the transistor is said to turn on. Below the threshold voltage, the current drops off exponentially, in the best case, at a rate of 60 mV/decade, i.e., every 0.06 V reduction below Vth results in the current dropping to 10% of its value (Figure 1). So we can see that if the leakage current at 0V is to be 0.1% (already a large allowance) of its value at Vth, the threshold voltage must be at least 0.18 V. In turn, the power supply voltage Vdd is expected to be several times Vth, e.g., ~ 1V. 60 mV/decade also means the current changes by a factor of 2 for every 0.02V shift. This is important for considering changes in the threshold voltage itself.
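The arithmetic above can be sanity-checked with a short sketch. This is a toy model assuming the ideal 60 mV/decade slope used in the text; the function name is ours, not from any device-modeling library:

```python
# Toy model of the subthreshold-leakage argument above, assuming an
# ideal subthreshold slope of 60 mV/decade: each 60 mV of gate voltage
# below Vth cuts the current by 10x.

def leakage_fraction(vth_volts, slope_mv_per_decade=60.0):
    """Fraction of the threshold current remaining at Vg = 0 V."""
    decades = vth_volts * 1000.0 / slope_mv_per_decade
    return 10.0 ** (-decades)

# Vth = 0.18 V spans three decades, i.e. ~0.1% leakage at Vg = 0 V,
# matching the estimate in the text.
print(leakage_fraction(0.18))

# A 20 mV shift changes the current by 10**(20/60) ~= 2.15x,
# i.e. roughly the factor of 2 quoted above.
print(10.0 ** (20.0 / 60.0))
```

Note the "factor of 2 per 20 mV" in the text is this ~2.15x, rounded.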

Figure 1. Subthreshold slope of 60 mV/decade gives ~0.1% leakage at 0V for Vth ~0.2V. A 20 mV drain-induced barrier lowering (DIBL) leads to ~2X change in current due to the shift of the Ids vs. Vg curve. 

The Short Channel Effect: Drain-Induced Barrier Lowering
Normally, in order to turn the transistor on or off, the gate voltage controls the depletion of charges under the gate, between the source and drain terminals. As shown in Figure 2, as the gate length Lg is reduced, the source and drain terminals move closer together, and the respective depletion layer widths Ws and Wd take up a significant portion of Lg. Specifically, the depths of the source and drain depletion layers cause electric field bending under the gate, which becomes more severe as the source-drain distance is narrowed.

Figure 2. The origin of drain-induced barrier lowering (DIBL). A larger gate (left) has a flat potential contour over most of the gate length, while a shorter gate (right) shows bending of the potential contour.

As a result, when the voltage from the source to drain is increased, the barrier in between is reduced fairly significantly, to the same degree as the voltage on the gate itself. This phenomenon is also known as drain-induced barrier lowering (DIBL). DIBL is generally given as the shift in threshold voltage (the reduction of the barrier) for a given shift in drain-source voltage. Usually the reference drain-source voltage is near zero, while the shifted voltage is near the supply voltage, and the threshold voltage shift is on the order of tens of millivolts. But given that a 20 mV shift already constitutes a factor of 2 change, when Vth ~ 0.2V and Vdd ~ 0.7-1V, a DIBL of 20 mV/V as shown in Figure 1 can therefore be considered an upper limit of tolerance.
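The tolerance argument can be made concrete with a small sketch (our own illustrative function, reusing the ideal 60 mV/decade slope assumed earlier):

```python
# Sketch of the DIBL tolerance argument: a DIBL of 20 mV/V shifts Vth
# by ~20 mV as Vds swings from near 0 V to the ~1 V supply, and at
# 60 mV/decade that shift multiplies the off-state current by
# 10**(20/60) ~= 2.15x.

def dibl_current_factor(dibl_mv_per_v, vds_swing_volts=1.0,
                        slope_mv_per_decade=60.0):
    """Approximate multiplier on subthreshold current caused by a
    DIBL-induced threshold shift, assuming an ideal slope."""
    shift_mv = dibl_mv_per_v * vds_swing_volts
    return 10.0 ** (shift_mv / slope_mv_per_decade)

# ~2x current change for 20 mV/V of DIBL over a 1 V swing.
print(round(dibl_current_factor(20.0), 2))
```

This is why 20 mV/V is treated as an upper limit of tolerance: anything larger doubles (or worse) the leakage budget.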

Have we already reached minimum Lg?
A minimum gate length of ~20 nm has already been predicted by scientists at IBM [1,5] as well as IMEC [6]. This holds for both SiO2 (minimum 1 nm) and high-k (HfO2 ~4-5 nm) gate dielectrics. It is derived from the characteristic decay length of the lateral electric field under the gate [1].

Figure 3. 2017 field FinFET data showing DIBL degradation for Lg of 20 nm and below [5].

A lower Lg limit of ~20 nm for the planar MOSFET means alternative transistor architectures need to be considered for achieving smaller gate lengths. The most well-known are the FinFET [5] and the surround-gate [7]. However, a similar Lg limit also appears to have been confirmed by field FinFET data [5] (Figure 3). This is not hard to imagine, as field bending toward the substrate is still possible within the fins. Moreover, in the case of the gate surrounding all sides of the silicon, the gate + 2x oxide thickness (>10 nm) must be added to the silicon body thickness, which hinders scaling of the cell height (perpendicular to the gate pitch). Considering drive current requirements [8], it is also preferred to widen the cell height [7], i.e., there is potential reverse scaling perpendicular to the gate pitch.

Implications
The limitation of the lateral scaling of transistors could portend greater reliance on 3D extension by wafer bonding, such as that implemented in the HBM interface [9]. Or it could be that the future of computing will shift more to memory, particularly those with 3D capacity expansion capability. Thus, the current ongoing developments toward in-memory computing, e.g., [10], are very timely.

References
[1] Y. Taur and T. Ning, Fundamentals of Modern VLSI Devices, 2nd Edition, Cambridge University Press, 2009.

[2] http://www.cs.ucl.ac.uk/staff/ucacdxq/projects/vlsi/report.pdf

[3] https://web.stanford.edu/class/ee316/MOSFET_Handout5.pdf

[4] http://www-inst.eecs.berkeley.edu/~ee130/sp03/lecture/lecture27.pdf

[5] A. Razavieh et al., “Scaling Challenges of FinFET Architecture below 40nm Contacted Gate Pitch,” 75th Annual Device Research Conference, 2017.

[6] http://www1.semi.org/eu/sites/semi.org/files/events/presentations/07_Hans%20Mertens_imec.pdf

[7] N. Loubet et al., “Stacked Nanosheet Gate-All-Around Transistor to Enable Scaling Beyond FinFET,” 2017 Symp. VLSI Technology.

[8] U. K. Das et al., “Limitations on Lateral Nanowire Scaling Beyond 7-nm Node,” IEEE Elec. Dev. Lett. 38, 9 (2017).

[9] https://en.wikipedia.org/wiki/High_Bandwidth_Memory

[10] https://www.researchgate.net/publication/335070394_RRAM_Based_In-Memory_Computing_From_Device_and_Large-Scale_Integration_System_Perspectives




Flex Logix CEO Update 2020

by Daniel Nenni on 05-08-2020 at 10:00 am


We started working with Flex Logix more than eight years ago and let me tell you it has been an interesting journey. Geoff Tate was our second CEO interview, so this is a follow up to that. The first one garnered more than 15,000 views and I expect more this time given the continued success of Flex Logix pioneering the eFPGA market, absolutely.

What is Flex Logix’ core strength?
My co-founder Cheng Wang invented and refined a superior programmable interconnect which we apply to a range of applications to solve major market needs; then we combine this with the software tools to program the resulting solution. Combined with our design methodology, we can create scalable and portable IP products very quickly and economically.

What markets/applications does Flex Logix play in?
Embedded FPGA (eFPGA)
AI inference
DSP acceleration

You started in eFPGA, how is that market developing for Flex Logix?
We are the “ARM of FPGA technology”: we license eFPGA for integration into SoCs, but we do not build chips.

Using our superior programmable interconnect we are able to achieve Xilinx-like density and performance in any process node using standard cells for rapid development and fewer metal layers.

We have proven eFPGA silicon with numerous customers and chips in 180nm, 40nm, 28/22nm, 16nm and 12nm process nodes. There are >10 working chips using eFPGA and >>10 more in fab and in design and many more planned. Our technology is mature and robust: our 2nd generation architecture is now 3 years old and every chip has worked 1st time.

Our early adopter market segment has been Aerospace (Sandia, Boeing, etc) but commercial design activity is now taking off as well (Morning Core, Dialog, etc). Our eFPGA technology has become strategically critical to many of our customers and they have extensive roadmap plans for a series of chips and they are driving us to improve our offerings to even better meet their needs, creating very high “stickiness”.

Half of our customers are using FPGA chips and want to integrate to reduce power/size/cost. Half of our customers have never used FPGA but use eFPGA for customizability and acceleration.

We provide software tools to program our eFPGA using Verilog.

The eFPGA market now is profitable for us and the cash flow is helping fund our AI Inference initiative.

How did Flex Logix get into AI Inference and why is it synergistic?
Companies like Microsoft use FPGAs in wide deployment to accelerate workloads including inference. Inference uses a lot of MAC operations – FPGAs have a lot of MACs, as do GPUs.

Customers a couple of years ago asked us if we could optimize our eFPGA for AI inference. Cheng studied the neural network models, like YOLOv3, and realized we could take our existing DSP MACs and optimize them for INT8/BF16, and that we could increase MAC density by clustering MACs into 1-dimensional systolic arrays of 64 MACs each. Using our programmable interconnect we can wire up MACs in very flexible ways to achieve high MAC utilization and throughput at low die cost for a wide range of neural network models. The resulting product is our nnMAX AI Inference IP which, like our eFPGA, is a tile that can be arrayed to achieve whatever throughput the customer needs for their SoC.

But initially we expect most customers to want to buy chips so we have designed and are taping out now our InferX X1 which is very compact and low cost but has performance that rivals chips 5-10x larger. We will also build PCIe boards and expect to sample in Q3 this year. We recently shared benchmarks vs Nvidia’s leading Xavier NX and Tesla T4, showing we have superior price/performance.

The interesting thing is the relative performance of X1/NX/T4 is very different from one model to another. Our customers did not expect this – they assumed they could get a benchmark for say ResNet-50 batch=1 and that would show relative performance. The reason it doesn’t is different models stress different aspects of the hardware (and software) architectures. For example, ResNet-50 has very small images and activations so it does not stress the memory subsystem; whereas YOLOv3 for megapixel images definitely does.

Our inference technology is available now for 16nm. Our roadmap is to make it available on 7/6nm and 12nm (for our Aerospace customers who want US fabrication).

So then what about DSP?
Just as customers led us to explore AI inference, customers have asked us “gee, your nnMAX IP has so many MACs in such a small area, can we use it for DSP?”

It turns out nnMAX is excellent for DSP, doing FIR filters at up to Gigasample rates with hundreds, thousands or even tens of thousands of taps using the arrayable nnMAX tile. For our ports to 7/6nm and 12nm we are exploring adding similar FFT performance.
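As a back-of-envelope check on why MAC density matters for this workload (illustrative arithmetic only, not Flex Logix's published specs): a direct-form FIR filter needs one multiply-accumulate per tap per sample, so the required MAC rate is simply sample rate times tap count.

```python
# Back-of-envelope FIR MAC budget: a direct-form FIR filter performs
# one multiply-accumulate per tap per output sample. Numbers below are
# illustrative, not vendor specifications.

def fir_mac_rate(sample_rate_hz, taps):
    """MAC operations per second required by a direct-form FIR filter."""
    return sample_rate_hz * taps

# 1 Gsample/s with 1,000 taps needs 1e12 MACs/s (a tera-MAC per
# second), which is why a dense, arrayable MAC fabric helps here.
print(fir_mac_rate(1e9, 1000))
```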

WEBINAR: eFPGA what’s available now, what’s coming & what’s possible to optimize your SoC

About Flex Logix
Flex Logix provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix’s second product line, nnMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 1 to >100 TOPS with higher throughput/$ and throughput/watt than other architectures. Flex Logix is headquartered in Mountain View, California. https://flex-logix.com/

Also Read:

CEO Interview: Jason Xing of Empyrean Software

Executive Interview: Howie Bernstein of HCL

CEO Interview: Adnan Hamid of Breker Systems


How to Modify, Release and Update IP in 30 Minutes or Less

by Mike Gianfagna on 05-08-2020 at 6:00 am


I had the opportunity to attend a ClioSoft webinar recently on the topic of IP traceability. ClioSoft provides a broad range of tools for design data management and IP reuse. Entitled The New Trend in IP Traceability that IP Developers and Design Managers Rely On, the webinar was presented by Karim Khalfan, director of applications engineering at ClioSoft. Karim has been at ClioSoft for almost 17 years, so he knows a lot about the company’s products and how they are used.

I’ve attended and produced many webinars over the years. There have been a lot more opportunities to do so in recent times. After a while, you identify the winning formula for that special medium of streaming delivery. Focus, clarity, clear examples and, above all, brevity are the ingredients that work. I can say that this ClioSoft webinar did everything right. Covering the complexity of IP tracing, along with a clear demonstration of how to address those perils from the perspective of three different users, all in under 30 minutes including a Q&A session, is impressive. Karim hit all the highlights perfectly.

If you didn’t have the opportunity to attend the event, don’t despair. There is a replay link coming in a bit.  Before we get to that, I’ll give you some highlights of Karim’s presentation.

First of all, why is IP traceability important? There are lots of intuitive responses to this question.  Here are three concrete points to consider:

  • Increase visibility: whether it’s a third-party or internally developed piece of IP, knowing where it’s been used and what kind of success it has seen is important
  • Improve quality: through tracking what projects are using the IP and how they’re using it
  • Reduce risk: by knowing if you’re using the right version and knowing how it works

Beyond the commonsense reasons for IP traceability, very clear and well documented IP tracing is the price of admission for standards-driven design projects such as those required by ISO26262 and MIL-STD-882.

With some motivation as to why IP traceability is important, Karim discussed the various stakeholders that would be involved in his live demonstration.  There are three:

  • IP Owner: Reviews Jira tickets, modifies IP, releases new versions
  • IP Consumer: Selects the right IP, updates it as needed and integrates the IP into the design project
  • Design Manager: Reviews all aspects of IP updates to ensure the correct IP is being used, analyzes and addresses any conflicts, approves the design and propagates results

So, what could go wrong in the lives of these folks on a real project without the right methodology and tools? Lack of proper notification of IP changes to all the teams and projects that use the IP, inability to find and review all the changes made (especially for binary representations) and incomplete propagation of required changes to all those using the IP are just a few of the headaches one could face.

Karim then ran a series of live demos on a real IP update example from the point of view of the IP owner, IP consumer and design manager. ClioSoft’s SOS7 design management platform formed the backbone of the demo, along with integrations to other key tools like the Jira issue tracking system and the Cadence Virtuoso layout editing platform.  

The demo began with the IP owner logging into Jira to find that an important IP enhancement was needed – reduce the finger count on a precision op amp. Logging into the ClioSoft SOS environment allowed the IP owner to find the IP and see all the design teams and projects that were using the IP. The IP owner then made the required changes in Virtuoso, which is integrated into the SOS environment. A new version of the IP was then checked back into SOS and the users of the IP were notified.

The IP consumer had many paths of notification for this change – it was actually hard to miss. This person then used additional analysis tools provided in SOS to examine the changes in the new version to make sure it was appropriate to update the instances. The changes were then reviewed by the design manager who identified an inconsistency in the use of ATPG in two blocks of the design. This was remedied with a quick query regarding available versions.

That’s a very short overview of the demo. I highly recommend you watch the live version; it shows a lot more detail about the capabilities available to all stakeholders in the ClioSoft tools. You can access the webinar replay here. While you’re on the ClioSoft website, you can check out all of their products supporting design data management and IP reuse. At my prior company, eSilicon, we were a ClioSoft customer and found their tools to work well and their customer support to be excellent.

Also Read

Best Practices for IP Reuse

WEBINAR REPLAY: AWS (Amazon) and ClioSoft Describe Best Cloud Practices

WEBINAR REPLAY: ClioSoft Facilitates Design Reuse with Cadence® Virtuoso®


High-Level Synthesis and Open Source Software Algorithms

High-Level Synthesis and Open Source Software Algorithms
by Daniel Payne on 05-07-2020 at 10:00 am

hls flow min

The DVCon conference and exhibition finished up in California just as the impact of the COVID-19 pandemic was ramping up in March, but at least they finished the conference by altering the schedule a bit. Umesh Sisodia, CEO at CircuitSutra Technologies, presented at DVCon on the topic, Using High-Level Synthesis to Migrate Open Source Software Algorithms to Semiconductor Chip Designs, and I had a chance to review his presentation.

My first exposure to High-Level Synthesis (HLS) was back in 2005 when I worked at Y Explorations, Inc., a company that started out using VHDL or Verilog as the input language, then later focused on C input.

So why would an engineer choose an HLS approach over a more traditional RTL coding methodology? With a higher level of abstraction as input, designers can separate design from implementation, use up to 10X less code (which reduces design effort), and benefit from 10 to 1,000X faster simulation speeds, making them much more productive.

Additional reasons to consider using an HLS flow:

  • One source, many implementations (ASIC, FPGA, eFPGA)
  • Optimize for wide array of Power, Performance and Area
  • Embedded SW engineers can use FPGAs
  • Lots of C/C++ tools available
  • Open Source algorithms can be re-used

The focus area of this blog is how to migrate the open source software algorithms to Verilog and accelerate these inside the semiconductor chips.

Many semiconductor companies are designing custom SoCs for emerging domains like vision, speech, video/image processing, 5G, deep learning, etc. In these domains many algorithms are already available as software implementations, either as free open-source versions or as the companies’ own in-house software.

In general, the software world has a huge code base available as free and open source code, most of which is widely used by the industry and is thoroughly verified. Many popular algorithms are available as an open source implementation, along with comprehensive reference test suites.

CircuitSutra is in the process of defining a robust methodology by which an existing software implementation can be quickly moved into silicon, a potential game changer for the industry.

Engineers at CircuitSutra migrated the open source C implementation of a Sobel filter to Verilog using a High Level Synthesis design flow.

Sobel Filter Example

For computer vision and image processing applications there’s an edge detection algorithm called a Sobel filter, and an open-source implementation is available on GitHub. The filter generates a 2D map of the gradient: it finds the direction of largest increase from light to dark, then rates the magnitude of change in that direction. Here’s an example starting image shown on the left, the gradients, and the filtered result:
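To make the gradient computation concrete, here is my own minimal C++ sketch of the per-pixel Sobel operation (the kernel constants are the standard Sobel kernels; this is an illustration, not CircuitSutra’s actual code):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <cstdlib>

// 3x3 Sobel kernels for the horizontal (Gx) and vertical (Gy) gradients.
static const int GX[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
static const int GY[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

// Gradient magnitude at the centre of a 3x3 pixel neighbourhood.
int sobel_magnitude(const std::array<std::array<uint8_t, 3>, 3>& win) {
    int gx = 0, gy = 0;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c) {
            gx += GX[r][c] * win[r][c];
            gy += GY[r][c] * win[r][c];
        }
    // |Gx| + |Gy| is a common fixed-point-friendly approximation of
    // sqrt(Gx^2 + Gy^2) that avoids a hardware square root.
    return std::abs(gx) + std::abs(gy);
}
```

Note the deliberately hardware-friendly choices: fixed loop bounds, fixed-size arrays and integer-only arithmetic, all of which matter once the code heads toward synthesis.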

HLS Flow

The team then modified the C code for the Sobel Filter to make it work with the synthesizable subset, then generated Verilog code using the Mentor Catapult tool.

There is a comprehensive, well-defined set of guidelines for using a synthesizable subset of C / C++ / SystemC which needs to be followed. The important points are listed below:

  • The HLS tool parses the code to extract the design intent, and the entire functionality should be extractable at compile time. Any functionality determined only at run time cannot be extracted by the tool. Constructs must be unambiguous and of fixed size.
  • C functions synthesize into RTL blocks, and function arguments synthesize to RTL I/O. Arrays in the C code synthesize to memory: RAM / ROM / FIFO.
  • Datatypes of the variables impact the precision, area and performance of the RTL. A generic 32-bit integer can be avoided if a 10-bit integer is sufficient. HLS tool vendors provide their own implementations of datatypes for use in synthesizable code. The Algorithmic C datatypes (AC datatypes) from Mentor Graphics were used in this exercise.
  • The synthesizable code cannot use function calls from libraries which are not synthesizable; you need to find a corresponding synthesizable library or implement the function yourself. The math.h functions used in the code were replaced with the corresponding function calls from ac_math.h / ac_dsp.h from Mentor Graphics.

Not all C / C++ constructs can be synthesized. You should avoid dynamic memory allocation, OS system calls, function pointers, STL classes, non-const global variables, utility libraries and so on.
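As a rough, tool-independent sketch of the datatype guideline above, plain C++ masking can emulate the wrap-around behaviour of a narrow fixed-width type such as Mentor’s ac_int<10,false> (the ac_int mapping is my assumption; the struct below is purely illustrative, not vendor code):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for a 10-bit unsigned HLS datatype
// (e.g. ac_int<10,false> -- hypothetical mapping, not vendor code).
// The mask keeps only the low 10 bits, mimicking hardware wrap-around.
struct UInt10 {
    static constexpr uint16_t MASK = 0x3FF;  // 2^10 - 1
    uint16_t v;
    explicit UInt10(unsigned x = 0) : v(static_cast<uint16_t>(x & MASK)) {}
    UInt10& operator+=(unsigned x) {
        v = static_cast<uint16_t>((v + x) & MASK);  // wraps at 1024
        return *this;
    }
};
```

The point is simply that a 10-bit accumulator synthesizes to 10 flip-flops and a 10-bit adder, whereas a generic 32-bit int wastes area and can hurt timing.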

One of the benefits of the proposed methodology is to reuse the test suite of the original software implementation to verify the final RTL implementation.

Most of the time, the original software implementation will have a comprehensive functional test suite; if not, it is a good idea to start by creating one. At this stage the code base is smallest and execution speed is fastest, so comprehensive functional verification requires minimal effort.

After refining the source code to make it compliant with the synthesizable subset, you re-use the same test suite to ensure that functionality is still intact. The synthesizable code is still C / C++ code which can be compiled with the gcc compiler, and does not require any specific tool set or specialized setup to run through the original test suite. Some minor updates to the testbench may be required. It is best to use the original software implementation as the golden reference to verify the synthesizable implementation.
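A minimal sketch of this golden-reference reuse might look like the following, with hypothetical function names and a toy saturating-add standing in for the real algorithm (my own illustration of the flow, not CircuitSutra’s harness):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Original software implementation: the golden reference.
uint8_t sat_add_golden(uint8_t a, uint8_t b) {
    int s = a + b;
    return s > 255 ? 255 : static_cast<uint8_t>(s);
}

// Refined "synthesizable-style" version: fixed-width arithmetic only.
uint8_t sat_add_refined(uint8_t a, uint8_t b) {
    uint16_t s = static_cast<uint16_t>(a) + b;
    return s > 255 ? 255 : static_cast<uint8_t>(s);
}

// Run the same test vectors through both implementations and compare,
// just as the flow reuses the original suite at each refinement step.
bool matches_golden(const std::vector<std::pair<uint8_t, uint8_t>>& vecs) {
    for (const auto& p : vecs)
        if (sat_add_golden(p.first, p.second) !=
            sat_add_refined(p.first, p.second))
            return false;
    return true;
}
```

The same comparison structure carries forward unchanged when the refined side is later replaced by the generated RTL under co-simulation.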

Next, you synthesize the refined implementation using the HLS tool to generate RTL. For functional verification of the RTL it is advisable to re-use the same original test suite with the original software implementation as the golden reference, so you can very quickly ascertain that the resulting RTL is functionally correct. This setup requires Verilog-C/C++ co-simulation; Mentor Catapult provides the SCVerify flow for this purpose.

A testbench was created to validate that the algorithm is working properly, and that testbench can be used at both the C++ and RTL levels.

With the flow explained so far, software experts can easily take the original software implementation and generate functionally correct RTL, without requiring in-depth knowledge of RTL. They just need to understand the synthesizable subset.

The RTL generated with these steps will be functionally correct, but it will not yet be in usable form, because it is not optimized for the specific target implementation (FPGA / ASIC / technology node) or for a specific target application with particular Power, Performance and Area (PPA) requirements. To get optimized RTL you will have to work with the HLS tool directives and constraints, and may have to further refine the synthesizable code by placing optimization directives or tool pragmas in the right places. It also requires re-structuring the synthesizable code to capture some of the macro-architecture: registers, memory, interfaces, etc. This exercise requires a strong understanding of RTL and cannot be done by software experts alone. The good news is that by now you already have a robust functional verification setup, so with each optimization iteration you can quickly ascertain that the implementation (C as well as RTL) is still functionally correct. This refine-optimize-verify cycle continues until you get final RTL that meets requirements.

Open Source HLS Libraries

Software developers generally have access to lots of free general-purpose libraries, however these cannot be readily used in synthesizable code. You need to find a corresponding HLS library, or implement one yourself per the synthesizable subset. A few HLS libraries come bundled with the HLS tools, and there are also a few open-source HLS libraries available.

Xilinx provides an HLS tool named Vivado that is widely used in the FPGA community to implement designs at a higher abstraction level using C / C++ / SystemC. The HLS libraries provided by Xilinx work with their HLS tool, but not with Mentor Catapult and other tools; likewise, the HLS libraries provided by Mentor Graphics work only with Catapult. It is therefore recommended to write the synthesizable code in a tool-independent fashion, so that the same code can easily be re-used across multiple projects targeting different technologies (FPGA, ASIC / SoC). There are some minor differences in how the synthesizable code has to be written for different tools. The article ‘Porting Vivado HLS Designs to Catapult HLS Platform’ provides a good summary of the differences in writing code for Xilinx Vivado and Mentor Catapult. The CircuitSutra team used these concepts to migrate some of the open-source Vivado HLS libraries to work with Mentor Catapult.

CircuitSutra is also in the process of developing tool independent HLS libraries corresponding to widely used software libraries.

Advanced ESL Flows

The methodology under consideration opens the door to various advanced ESL flows that were previously mostly a wish list.

Apart from high-level synthesis, the other widely used ESL methodology is virtual prototyping. Virtual prototypes are fast simulation models of SoCs and systems. They are used for pre-silicon embedded software development, SoC-level and system-level co-design and co-verification, automated unit testing of firmware, architecture exploration and more. A virtual prototype uses a CPU Instruction Set Simulator (ISS) along with IP models and memory models. The models for virtual prototypes are developed using SystemC, which is a C++ library.

There has always been talk in the industry of having a single model which can be used in virtual prototypes and also synthesized using an HLS tool to generate RTL. However, the two use-cases require different kinds of high-level code: HLS requires code that is compliant with the synthesizable subset, while virtual prototypes require code that simulates as fast as possible. Virtual prototype models use concepts like Transaction Level Modeling (TLM) and loosely-timed (LT) modeling, and can use any constructs of C and C++.

Starting with the same original open-source software implementation, you can now create models for both high-level synthesis and virtual prototypes. While I explained how to make the code synthesizable, developing the model for a virtual prototype is even simpler: you just need to wrap the software implementation in SystemC and implement the transaction-level interfaces and other macro-architecture details of the IP, like registers, memory, etc. The same test suite can be used for the verification of both models.

You can also add hybrid modeling by simulating parts of your design in virtual prototype, RTL simulation, FPGA chips or emulator boxes.

A virtual prototype enables verification at the SoC level using bare-metal tests, firmware and embedded applications. For maximum productivity and re-usability, you can move step by step. First, run these tests on the pure virtual prototype containing TLM models of the IP. Next, replace the TLM version of a specific IP block with the synthesizable C / C++ / SystemC implementation and verify with the same test suite. Finally, through co-simulation, replace that IP block with the RTL implementation and verify with the same test suite again. Each step moves to slower simulation, and the objective is to catch as many bugs as possible early in the cycle, while simulation is fast.

The RTL IP has to be thoroughly verified using SystemVerilog and a UVM environment. The same environment can be used to further verify the TLM and synthesizable models, ensuring complete equivalence at all abstraction levels.

These advanced flows are also likely to enable effective use of the upcoming Portable Stimulus standard, which allows you to generate different flavors of test cases from the same verification intent.

Summary

HLS is a proven approach for ESL design, and moving up from RTL coding to HLS gives you time to actually explore the design space and make early trade-offs. Because the SW is written in C and C++, you can simulate SW and early HW together, always a good thing compared to waiting for silicon to arrive. Virtual platforms let you decide what goes into SW and what goes into HW.

Companies like CircuitSutra have deep experience using these ESL approaches to implement new design products quickly and correctly.

About CircuitSutra

CircuitSutra is an Electronic System Level (ESL) design IP and services company headquartered in India, with development centers in Noida and Bangalore and an office in Santa Clara, CA. It enables customers to adopt advanced methodologies based on C, C++, SystemC, TLM, IP-XACT, UVM-SystemC, SystemC-AMS and Verilog-AMS. Its core competencies include virtual prototypes (development, verification, deployment), high-level synthesis, architecture and performance modeling, and SoC- and system-level co-design and co-verification.

CircuitSutra’s mission is to accelerate the adoption of ESL methodologies in the Industry.

CircuitSutra provides best-in-class ESL experts who work as an extension of the customer’s R&D team, either remotely through an offshore development center (ODC) model or onsite at the customer’s location. CircuitSutra provides re-usable modeling IP and methodology that help customers quick-start their modeling projects. It also provides specialized SystemC training that helps customers groom non-SystemC professionals into virtual prototyping experts.

Related Blogs


Ultra-Low Power Inference at the Extreme Edge

Ultra-Low Power Inference at the Extreme Edge
by Bernard Murphy on 05-07-2020 at 6:00 am

Intelligent IoT

I wrote last year about Eta Compute and their continuously tuned dynamic voltage-frequency scaling (CVFS). That piece was mostly about the how and why of the technology: in self-timed circuits (a core technology for Eta Compute) it is possible to continuously vary voltage and frequency, whereas in conventional synchronous logic it’s only possible to switch between a few discrete voltage and frequency options. You might think ‘self-timed, this must be about performance,’ but in fact Eta Compute is pushing it for ultra-low power at the extreme edge in AI applications.

I haven’t talked with them in a while, so I confess I’m catching up. From what I see, it looks like they’ve found their sweet spot: power-constrained applications where some level of inference is required. They cite as examples intelligent sensing and/or voice activation and control in:

    • Building: thermostats, smoke detector, alarm sensors
    • Home consumer: washing machines, remote control, TV, earbuds
    • Medical and fitness: fitness band, health monitor, patches, hearing aid
    • Logistics: asset tracking, retail beacon, remote monitoring
    • Factory: motors, industrial networks, industrial sensors

The most recent Eta Compute solution is realized in their ECM3532 neural sensor processor. This is a system on chip with an Arm Cortex-M3 processor and an NXP CoolFlux DSP, 512KB of Flash, 352KB of SRAM, and supporting peripherals.  All of this is built with Eta Compute’s proprietary CVFS (continuous voltage frequency scaling) technology, operating near threshold voltage.

The dual-MAC DSP handles signal processing from sensors, feature extraction and inferencing. The MCU handles application software, control and networking. I’ve seen this combo in other products (though not built on CVFS technology) so it looks like an up and coming architecture to me.

Eta Compute’s benchmarking shows the Cortex MCU running at up to 10X lower power than competitive solutions across a wide range of temperatures and process corners. Even more important, they ran a range of neural-net benchmarks: image recognition, sound recognition (e.g., glass breaking), motion sensing, always-on keyword recognition and always-on command recognition. In all cases the chip runs at a few hundred micro-amps while performing multiple inferences per second (up to 50 for motion sensing).

Overall, Eta Compute say they can already reduce power in AI at the extreme edge by a factor of 10. This is for published networks, not specifically optimized to extreme edge applications. They have been running trials with partners to further optimize networks and have already demonstrated an additional 10X increase in efficiency in image recognition through reducing operations by a factor of 10 and weight sizes by a factor of 2. Comparing that with a common MCU-only implementation, they claim 1000X higher efficiency.

At these numbers, intelligence at the extreme edge could become ubiquitous, even down to truly remote, coin-cell operated devices, asset-tracking devices, even energy-harvesting devices. Eta Compute don’t yet want to provide customer names, but it sounds like they have quite a few already in development.

Eta Compute recently released a white paper – Deep learning at the extreme edge: a manifesto – on their vision and technology. You can download the white paper HERE.


Tech Shows up for COVID-19: Time to Expand Horizons

Tech Shows up for COVID-19: Time to Expand Horizons
by Terry Daly on 05-06-2020 at 10:00 am

Covid Tech 2020

Bring digital technology solutions to bear on more of our toughest societal problems 

(Illustration/iStock)

“We are all in this together”. The world faces 250,000 COVID-19 deaths, each a tragic human story. The pandemic will bring a litany of “lessons learned” including lack of preparedness, slow response and uneven recovery. The rapid pivot by the political class from response to recrimination amplifies the tragedy and the economic pain from shutdowns.  But a secondary story line holds hope for the future: the response by the technology sector (Tech) in rising to the challenge of this humanitarian crisis.  The actions taken by Tech were immediate, generous, targeted and impactful. Thank you! Regrettably, there are equally perilous humanitarian crises hiding in plain sight. Each deserves from Tech the same spirit of collaboration and intensity of response as with COVID-19.

What a heartwarming response by Tech! Millions of dollars donated to charitable organizations world-wide along with tens of millions of masks and personal protective equipment provided to front-line medical staff. Real estate was made available for hospital overflow. Product shipments were prioritized and expedited for medical applications. Specific examples abound. AMD created a $15 million initiative to provide high-performance computing (HPC) platforms and resources to accelerate medical research. Intel carved out $50 million to fund access to its technology at medical points of care, speed scientific research and increase access to on-line education. NVidia offered free access to its Parabricks offering, enabling researchers to analyze genomic sequences 50 times faster.

Apple and Google joined to create a contact tracing app. Google provided free access to Hangouts Meet videoconferencing to support remote education.  Microsoft helped the CDC develop a tool to assess COVID symptoms and suggest patient courses of action. One million messages per day were fielded helping doctors and nurses prioritize and provide care for those most directly in need.  IBM led a public-private COVID-19 HPC Initiative with free supercomputer use, free access to its patent portfolio for COVID research and blockchain support to help governments and healthcare groups address supply shortages. These examples are merely illustrative of a much broader Tech response.

COVID has exacted an enormous toll, a loss of life and livelihood across the globe. But there are other humanitarian catastrophes hiding in plain sight. Much of the world at large seems to have become numb to these crises, to have developed a collective immunity to response. Poverty, hunger and a lack of access to clean water, shelter, education and basic health services exact annual death tolls far exceeding COVID’s. United Nations data shows that at least ten percent of the global population, over 700 million people, lack access to clean water and live in extreme poverty. Five million children die every year due to poor health services; 265 million children are out of school due to lack of access to education and the need to focus on survival. Political strife has created 70 million refugees.

With a COVID-awoken sensibility, can Tech mobilize to solve these intractable problems? Innovation is emerging to make it happen: AI, 5G, blockchain, IoT, quantum computing, autonomous transport, and others. COVID has accelerated our transformation into the digital economy at a pace unimaginable in December 2019, notably in on-line education, telemedicine and digital payments. In its pandemic response announcement, Intel said its “… technology underpins critical products and services that global communities, governments and healthcare organizations depend on every day. We hope that by harnessing our expertise, resources, technology and talents, we can help save and enrich lives by solving the world’s greatest challenges through the creation and development of new technology-based innovations and approaches.” Spot on! But the world needs Tech to move beyond “hope” and establish a concrete path to close massive inequality gaps and establish a truly inclusive global society.

The familiar Moore’s Law can provide inspiration. Although more than 50 years old, Moore’s Law was futuristic, predicting a doubling in circuit density every two years. But it was not pre-ordained; rather, it was achieved by innovation, hard work, collaboration, investment and commitment across Tech. Process technology “roadmaps” and targeted parameters pointed to the milestones necessary to stay on the curve. The industry made Moore’s Law a reality, and it continues to set audacious goals, invest, compete, collaborate, innovate, solve challenges and reward success. This is the approach needed to tackle society’s toughest problems. The private sector is best positioned to make it happen.

But is this the proper role for Tech and the private sector? Companies need a maniacal focus on product development, market validation, execution and scaling to be successful. Wealth creation is the incentive engine that drives success. Contributing to the larger societal benefits as envisioned here seems out of scope. How to proceed? Tech and Venture Capital can support passionate non-profit entrepreneurs with know-how, access to IP, funding and emerging digital technology to solve the toughest societal issues.

Take the example of “charity: water”, founded in 2006 by a non-Tech entrepreneur with the mission to bring clean and safe drinking water to every person living without it.  His team established a technology-based digital marketing and fund-raising platform that matched donors to water projects. They established ecosystem partners for local implementation and a remote monitoring tool using IoT sensors and cloud computing technology to provide real-time data on water system performance, assuring sustainability of investment. By year-end 2019, 1 million donors contributed $450 million to over 51,000 water projects in 28 countries, ultimately providing more than 11 million people with clean, safe drinking water.  Charity: water is inspiring, impactful and scalable. Imagine how much broader and faster the impact could be with concerted and sustained support from Tech!

The opportunity is now to bring the enormous power of digital technologies to tackle poverty, hunger, water, shelter, health and education with the same focus, purpose, partnership and investment that was brought to bear on COVID-19. Will Tech be all in this battle together?

Terry Daly is a retired semiconductor industry executive


Reliable Line Cutting for Spacer-based Patterning

Reliable Line Cutting for Spacer-based Patterning
by Fred Chen on 05-06-2020 at 6:00 am

Reliable Line Cutting for Spacer based Patterning

Spacer-defined patterning is an expected requirement for advanced semiconductor patterning nodes with feature sizes of 25 nm or less. As the required gaps between features go well below the lithography tool’s resolution limit, cut exposures to separate features are used more often, especially in chips produced by TSMC or Intel, where “cut poly” and “cut metal” are applied [1,2]. However, line cutting introduces new concerns, such as placement error, as illustrated in Figure 1.

Figure 1. The effect of line cut placement error is to increase the risk of arcing across the narrowest portion of the gap (right).

The cut itself is expected to be rounded when confined to very small spaces, which leads to burrs or spurs at the cut locations. Moreover, the cut cannot be perfectly placed all the time, and this placement error causes the spurs to narrow the gap toward one side. Consequently, unwanted arcing across the gap becomes likely. Fortunately, there are a number of ways to address this issue today.

Solution 1: Design rule/layout restrictions

The quickest way to avoid these issues is to allow enough clearance that the cuts are not rounded. This would mean the layout of Figure 1, with gaps close to one another, would be forbidden by the Design Rule Check (DRC) [2]. On the other hand, some layouts such as the DRAM active area (see Figure 2 for an example) are more tolerant.

Figure 2. The cut for the DRAM active area shown here (18.4 degrees from vertical) is achievable by single immersion exposure with a phase shift mask for features < 0.5 wavelength/NA [3], for cut pitches of 80 nm and above. For smaller cut pitches, double patterning, e.g., spacer double patterning, would be required.

Solution 2: Cut grid with selection mask

If the layout of Figure 1 must be used, then a different process is needed to ensure a straight-edge cut. One possible approach is to use a cut grid with a selection mask [4]. This is illustrated in Figure 3.

Figure 3. Process sequence for cut selection from a pre-defined cut grid.

This approach entails three masks for the cut instead of one. The first two masks define the cut grid in an etch mask over the pre-patterned lines: the first defines a grid of lines perpendicular to the lines to be cut, and the second defines posts separating the rectangular target cut locations. A third mask then selects the actual cut locations from the grid. The advantage of the cut selection approach is that the cut grid is predefined with straight-edge cuts; however, it requires more masks and process steps.

Solution 3: Self-aligned blocking (or cutting)

A reduction in mask count is possible with the self-aligned blocking (SAB) approach [5]. In this approach, the spacer-defined lines are divided into two groups in an ABAB… fashion, where any two adjacent lines separated by spacers belong to different groups (A or B). Two different materials in the process flow represent the two groups (Figure 4). These materials are selected so that one may be etched without affecting the other; the spacers in between are also not etched. Consequently, a cut across five lines may cut just the two selected lines, with the straight edge allowed by the longer length of the cut. There is one cut mask for removing A material only and one for removing B material only. Note that the cut masks may also make use of spacer-defined double patterning [6]. The emergence of SAB means that two masks (for A and B) will be used, independent of wavelength.

Figure 4. Self-aligned blocking (or cutting) approach makes use of etch selectivity to avoid unwanted cutting of adjacent feature lines.

References

[1] M. C. Smayling, V. Axelrad, “Simulation-Based Lithography Optimization for Logic Circuits at 22nm and Below,” SISPAD 2009.

[2] https://www.design-reuse.com/articles/45832/design-rule-check-drc-violations-asic-designs-7nm-finfet.html

[3] L. W. Liebmann, J-A. Carballo, “Layout Methodology Impact of Resolution Enhancement Techniques,” Proc. 2003 Intl. Symp. Phys. Design, 110.

[4] http://www.tela-inc.com/wordpress/wp-content/uploads/2012/05/SPIE-2013_8683.pdf

[5] https://www.researchgate.net/profile/Angelique_Raley/publication/316087783_Self-aligned_blocking_integration_demonstration_for_critical_sub-40nm_pitch_Mx_l5vel_patterning/links/59f08353aca272cdc7ca3200/Self-aligned-blocking-integration-demonstration-for-critical-sub-40nm-pitch-Mx-level-patterning.pdf

[6] US Patent 9240329, assigned to Tokyo Electron Limited, filed Feb. 17, 2015.

Related Lithography Posts