
Internet of Things and the Wearable Market
by Daniel Nenni on 03-09-2014 at 8:00 pm

My wife and I drove to Southern California last week in search of information on the wearable computing market. After stops in Irvine and San Diego, and some play time in La Jolla, we returned in time for the CASPA Symposium: “The Wearable Future: Moving Beyond the Hype; the Search for the Holy Grail and Practical Use Cases”. CASPA is the Chinese American Semiconductor Professionals Association, and their Spring Symposium was held at the Intel HQ Auditorium in Santa Clara with a standing-room-only crowd.


The big attraction for me was the keynote speaker Dr. Reza Kazerounian, SVP & GM of the Microcontroller Business Unit at Atmel. I originally ran across his name during my research for “A Brief History of STMicroelectronics” (the piece I did last week), as he was CEO of ST Americas from 2000 to 2009. It was truly an honor to hear him speak.

The Internet of Things (IoT) is opening up fresh horizons for a new generation of intelligent systems that leverage contextual computing and sensing platforms, creating new markets. One of these platforms is the wearable category of devices, where the combination of sensors on low-power sensor fusion platforms with short-range wireless connectivity is giving rise to a variety of exciting end markets. From self-quantification to a variety of location-based applications to remote health monitoring, wearables are becoming the harbinger of a whole host of services. With the right set of biometric sensors combined with fast local data analytics, wearables have the potential to revolutionize the health care industry. These devices can provide real-time data and contextual information that meet health care requirements, improving the quality of care and lowering its overall cost. This discussion will review the underlying technologies needed to make the “always-on health care revolution” happen, and explore how the future of medicine is being shaped by wearable devices.

Contextual computing is the key term here and, yes, I had to look it up. The application I’m most interested in, besides fitness, is security. I want my smartphone to know it is me holding it by my movements, voice, and usage. I remember when my credit card kept getting security-flagged after I started traveling internationally; once Visa profiled my usage it never happened again. As the smartphone takes over our financial lives, security will be even more critical.

There are three key requirements for wearable market silicon: low power, low cost, and low area. Billions of these devices will be deployed over the next 10 years, so the market will far exceed smartphones. The wearable market will be very fragmented, which opens up opportunities for entrepreneurs around the world. In fact, Dr. Kazerounian predicted that 15% of those devices will come from companies that are less than 3 years old, a prediction I agree with wholeheartedly.

One of the big challenges is low-power connectivity. For now these devices will be talking to our smartphones, and that means ultra-low-power connectivity. Coincidentally, Atmel just announced a new SmartConnect family that combines Atmel’s ultra-low-power MCUs with its wireless solutions and complementary software in a single package:

“Ultra-low power wireless connectivity is critical for embedded applications in the era of the Internet of Things,” said Reza Kazerounian, Sr. Vice President and General Manager, Microcontroller Business Unit, Atmel Corporation. “Atmel’s SmartConnect technology is about simplifying the use of embedded wireless connectivity technologies and enabling users to accelerate their time-to-market. This simplicity allows all players to participate in the IoT market, fueling the innovation needed to accelerate adoption.”

Celebrating their 30th year, Atmel is an IoT market leader with an interesting history that you can read about HERE.

More Articles by Daniel Nenni…..



Semiconductor Strategy – From Productivity to Profitability
by Pawan Fangaria on 03-08-2014 at 8:30 am

The semiconductor industry may be the most challenged in terms of the cost of error: a delay of 3 months in the product development cycle can reduce revenue by about 27%, and a delay of 6 months can cut it almost in half. Competition is rife, pushing products to the next generation (more functionality, lower power, higher performance, smaller footprint, better graphics, and much more) at short intervals. This trend has clearly segmented the semiconductor market into design creators (IP vendors focusing on the most PPA-optimized IPs) and design integrators (SoC vendors focusing on overall quality, cost, and time-to-market), with fabs, given ever-shrinking technology nodes and rising complexity, remaining concentrated among a few players with large capital investment capability.
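To see where figures like these can come from, here is a minimal sketch of the classic triangular market-window model for time-to-market revenue loss. It is my illustration, not necessarily the author’s source, and the window length is a fitted assumption, chosen because it reproduces the article’s numbers.

```python
# Triangular market-window model (a widely cited rule of thumb): the market
# ramps up linearly for W months and back down for W months, and a product
# entering D months late loses D * (3W - D) / (2 * W**2) of the lifetime
# revenue. Illustrative only.

def revenue_loss(delay_months, window_months):
    """Fraction of lifetime revenue lost for a delay of `delay_months`
    in a market window of `window_months` (ramp-up time to the peak)."""
    d, w = delay_months, window_months
    return d * (3 * w - d) / (2 * w ** 2)

W = 15.6  # months; hypothetical window that reproduces the article's figures
for d in (3, 6):
    print(f"{d}-month delay -> {revenue_loss(d, W) * 100:.0f}% of revenue lost")
# prints: 3-month delay -> 27% ; 6-month delay -> 50%
```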

Considering these challenges, semiconductor companies are focused on improving their processes, developing expertise to handle new complexities, increasing verification coverage to improve quality, and so on. While these initiatives definitely improve productivity, today’s business environment needs a greater focus on improving profitability. Higher costs and reduced profitability have led to mergers of several organizations even though they were productive. It’s no secret that, to remain profitable, companies have to be closer to their customers, be collaborative, produce what customers require quickly, and re-use whatever they can. But how?

Last week I was talking to Michael Munsey, Director of Semiconductor Strategy at Dassault Systemes. I was very impressed with the strategy and the broader solution framework they are putting in place to address productivity and its transformation into profitability in the semiconductor industry. Out of curiosity I also looked into Dassault’s own profitability, and that was impressive too; with ~$2.6B revenue and ~31.6% operating margin (non-IFRS), it’s no wonder Forbes keeps this company on its list of the world’s most innovative companies!

The idea Dassault has put together is genuinely innovative: it maps onto the current semiconductor market segmentation and aligns itself with it, so that every stakeholder in the design chain (or rather the complete product cycle) can extract maximum value from the product. How is that possible?

Considering this global business reality, Dassault’s 3DEXPERIENCE platform focuses on best-in-class design creation, flawless integration, and manufacturing optimization, which together lead to profitability. To create the best devices, the ‘Product Engineering’ framework provides requirement specification management and New Product Introduction (NPI) to develop what the customer (or, in a broader sense, the market) needs, plus continuous defect tracking and resolution to remain relevant. ‘Design Engineering’ then manages IPs and their protection, integrates IPs through collaboration between the various teams, and verifies the complete design against the specifications. Finally, ‘Manufacturing Engineering’ works through device configurations, packaging simulations, and yield analysis and optimization. The overall platform is geared toward rapid integration of the best devices in a highly collaborative environment for analysis, prototyping, and optimization, followed by cost-optimized, risk-reduced manufacturing that can generate profit.

The 3DEXPERIENCE platform has four major solution spaces:

  • Design Collaboration provides ‘Semiconductor Collaborative Design’, which includes issue & defect tracking and change management; requirements, traceability & test; and project & portfolio management.
  • Enterprise IP Management provides ‘Semiconductor IP Management’, which applies the same issue, defect & change management, requirements/traceability/test, and project & portfolio management capabilities to IPs.
  • Requirement Driven Verification provides ‘Semiconductor Verification & Validation’, which includes the whole Collaborative Design and IP Management pieces along with the common capabilities above.
  • Manufacturing Collaboration provides ‘Semiconductor Manufacturing Configuration’, which includes Semiconductor Packaging Simulation, Semiconductor Manufacturing Process Improvement, project & portfolio management, and requirements, traceability & test.

Together, these solution pieces form a system that checks wastage of resources and effort, the cost of NPIs, quality processes, and misalignment between product and requirements, in order to reduce re-spins, increase re-use, and optimize resources, thus resulting in a profitable business. In the future I will write more about these individual pieces, and share stories about how these solutions together address global semiconductor design and manufacturing challenges. Stay tuned!

More Articles by Pawan Fangaria…..



IC Layout with Interactive or Batch DRC and LVS Results
by Daniel Payne on 03-07-2014 at 6:27 pm

IC designers have a long tradition of mixing and matching EDA tools from multiple vendors, mostly because they want best-in-class tools, or because they purchased each EDA tool at a different time and asked for them to work together. Such is the case with IC layout tools from Silvaco and DRC/LVS tools from Mentor Graphics. Pawan Fangaria blogged about the Results Viewing Environment (RVE) of Calibre back in October 2013. Today I learned that the IC layout tool from Silvaco is called Expert, and that it has an integration with Calibre RVE.

Continue reading “IC Layout with Interactive or Batch DRC and LVS Results”


Key Ingredients for ESL Power Modeling, Simulation, Analysis and Optimizations
by Daniel Payne on 03-07-2014 at 6:00 pm

There’s a French EDA company named DOCEA Power that is uniquely focused on power analysis at the ESL level, and I had a chance to interview Ridha Hamza to get new insight into ESL design challenges and their approach. Ridha started out doing SRAM design at STMicroelectronics in the 1990s, moved into the emerging field of MEMS, and finally joined DOCEA Power four years ago.


Continue reading “Key Ingredients for ESL Power Modeling, Simulation, Analysis and Optimizations”


On-Chip Clock Generation beyond Phase Locked Loop
by Daniel Nenni on 03-07-2014 at 8:00 am

Inside today’s typical VLSI system there are millions of electrical signals. They make the system do what it is designed to do. Among them, the most important is the clock signal. From an operational perspective, the clock is the timekeeper of the electrical world inside the chip or system. From a structural perspective, the clock generator is the heart of the chip, the clock signal is the blood, and the clock distribution network is the vessels.

The timekeeper has played, and still plays, a critical role in human life. History shows that the progressive advancement of our civilization was made possible in part by the steady refinement of the timekeeper: the clock or watch. The same is true for VLSI systems. The purpose of a VLSI system is to process information, and the efficiency of that task depends heavily on the time scale used. This time scale is controlled by the clock signal. It has two key aspects: its size (the absolute clock frequency) and its resolution (the capability of differentiating nearby frequencies, i.e. the frequency and time granularity). A third characteristic is also important: the speed at which the time scale can be switched from one setting to another (the speed of clock frequency switching). The Phase Locked Loop (PLL) has traditionally been used as the on-chip clock generator. It is a beautiful blend of digital and analog circuits in one piece of hardware: from a reference time scale, it can generate other time scales. However, because it uses a compare-then-correct feedback mechanism, the choice of time scales it can produce is limited. Equally harsh is the problem that changing the time scale (frequency switching in a PLL) takes a very long time. Although the PLL has played a key role in making today’s VLSI systems magnificent, these two problems limit the chip architect’s scope for further innovation.

The source of the problem is the very fact that electrical circuits are not born to handle time, but magnitude (or level). Inside a circuit, information is represented through the medium of the electron. It is encoded in the magnitude of electron flow, using proportional (analog) or binary (digital) relationships. Time is created only indirectly, through a voltage level crossing a predetermined threshold. The task of building a timekeeper inside a VLSI system is therefore inherently difficult, since it relates two basic properties of the universe: time and force. In implementation, another fact has made the task of creating time inside a circuit even more challenging: since the first day the clock signal was introduced into VLSI design, it has been assumed that all the pulses inside a particular clock pulse train must be equal in length. This presupposition has limited our options in the creation of timekeeper circuits. Consequently, our current solution is not completely satisfactory: 1) we cannot generate any arbitrary frequency we want; 2) we cannot switch frequency quickly.

Since the timekeeper controls the VLSI system’s operating pace through the clock-driving circuit, a fundamental question can be asked: do all the pulses in a clock pulse train have to be equal in length? This question is equivalent to asking: what does clock frequency really mean? In 2008 a novel concept, Time-Average-Frequency, was introduced. It removes the constraint that all pulses (or clock cycles) must be equal in length. It is based on the understanding that clock frequency indicates the number of operations executed (or events that happen) within a time window of one second. As long as the specified number of operations is completed successfully in the specified time window (such as one billion operations within one second for a 1 GHz CPU), the system does not care how each individual operation is carried out. This breakthrough in the concept of clock frequency is crucial: it frees our hands in building the clockwork.
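To make the concept concrete, here is a tiny numeric sketch of Time-Average-Frequency. It is my illustration rather than code from the book, and the multiphase-source numbers (4 GHz, 32 phases) are hypothetical; the point is that interleaving just two period lengths yields a continuum of average frequencies, switchable as fast as the mixing fraction can be updated.

```python
# Time-Average-Frequency sketch: a pulse train interleaves two period
# lengths T_A = I*delta and T_B = (I+1)*delta, using T_B a fraction r
# of the time. The average frequency is 1 / ((1-r)*T_A + r*T_B).

def time_average_frequency(t_a, t_b, r):
    """Average frequency when period t_b occurs with fraction r and
    period t_a with fraction (1 - r)."""
    return 1.0 / ((1.0 - r) * t_a + r * t_b)

# Hypothetical base time unit: one phase step of a 32-phase, 4 GHz source.
delta = 1.0 / (4e9 * 32)   # 7.8125 ps
t_a = 40 * delta           # 312.5 ps  (3.2 GHz if used alone)
t_b = 41 * delta           # 320.3 ps  (~3.122 GHz if used alone)

# Sweeping r generates any frequency between the two endpoints; switching
# is as fast as changing r, with no feedback loop to re-lock.
for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    f = time_average_frequency(t_a, t_b, r)
    print(f"r = {r:4.2f} -> f_avg = {f / 1e9:.4f} GHz")
```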


[Figure: Clock as a technology]

From the days of Jack Kilby’s and Robert Noyce’s first integrated circuits in 1958 and 1959 to today’s systems of billions of transistors on a chip, the art of integrated circuit design can be roughly divided into three key areas: processor technology, memory technology, and analog technology. Processor technology focuses on how to build efficient circuits to process information; using transistors to perform logic and arithmetic operations with high efficiency is its highest priority. Memory technology is the study of storing information in circuits; its aim is to store and retrieve information in large amounts and at high speed. Analog technology concentrates on circuits that interface the electrical system with humans. Inside a VLSI system, information is processed in binary fashion; once outside, information is used by us in proportional style, since our five senses are built on proportional relationships. The analog circuit is the bridge in between. During the past several decades, the advancements in these three circuit technologies have made today’s VLSI systems very powerful. However, the driver of all three, the clock, has not seen fundamental improvement. The time scale is not flexible: the available clock frequencies are limited, and switching between frequencies is slow.

To further improve a VLSI system’s information-processing efficiency, the next opportunity is in the method of clocking: 1) we need a flexible on-chip clock source; 2) it needs to be available to chip designers at a reasonable cost. Now is the time for the clock to be recognized as a technology in its own right, as illustrated in the figure above. In this field there are four key issues: high clock frequency, low noise, arbitrary frequency generation, and fast switching. The first two have been studied intensively by researchers. The last two have not drawn much attention, for two reasons. The first is that arbitrary frequency generation and fast frequency switching are difficult to achieve, especially simultaneously (in contrast, arbitrary voltage generation and fast voltage switching are easy). The second is that chip and system architects have not asked for them, so circuit designers have had no motivation. These two factors are cause and effect of each other: the system architect does not know it can be done; the circuit designer does not know it is needed. The goal of this article is to break this lock, and to provide a vision that it can be done and that it is useful. The aim of Time-Average-Frequency is to make a flexible on-chip clock source available to chip designers. This concept and technology is a link between circuit and system: a circuit-level enabler for system-level innovation.

The book “Nanometer Frequency Synthesis Beyond the Phase-Locked Loop” introduces a new way of thinking about the fundamental concept of clock frequency. It presents a new circuit architecture for frequency synthesis, Time-Average-Frequency based Direct Period Synthesis, and proposes a new circuit component, the Digital-to-Frequency Converter (DFC). Its influence can go beyond clock signal generation; it is a new frontier for electronic system design.

Nanometer Frequency Synthesis Beyond the Phase-Locked Loop (IEEE Press Series on Microelectronic Systems) by Liming Xiu



The (re)making of Arteris, 1-2-3
by Don Dingee on 03-06-2014 at 6:00 pm

Success in a business with extended design-in cycles may look easy. In reality, there is a delicate balance between many factors. Some come to mind immediately: developing and releasing a good product in the first place; winning and keeping the right customers, not too few or too many; balancing investment between support and R&D; and constantly evaluating the landscape for disruptive change and newer ideas.

The semiconductor intellectual property business can test the mettle of even seasoned industry veterans. After all, an IP vendor only makes one piece of a solution; success means helping a customer make something complete that ships in volume with the idea inside. The bulk of the credit often goes unrecognized by other potential customers because of NDAs and confidentiality.

There is perhaps no greater endorsement for a company and its technology than when a superpower swoops in and buys the rights to the mainline IP by acquiring all or part of the firm. When our SemiWiki staff last visited one of these cases at the end of October, the burning question was: how does Arteris move forward with its business after selling key network-on-chip technology (but not other customer royalties, retaining sales/marketing rights) and most of its engineering intellectual capital to Qualcomm?

I’ll just say that Qualcomm can be very persuasive, and it was probably the tech version of the “offer they can’t refuse” sans firearms but including bags of cash. Both Eric Esteve and Paul McLellan previously analyzed the highlights of this rather unique transfer agreement. Each touched on the notion that Arteris would likely take the cash and reinvest, adding new engineering talent in moving forward with their NoC vision.

Eric – Qualcomm-Arteris deal: high cost of differentiation
Paul – Qualcomm and Arteris: The CEO Speaks

Talent, indeed. Arteris has reached into the well and come out with three prominent names from a cross-section of the industry, filling key leadership positions.

  • Craig Forrest joins as CTO, and we only need one line in his bio to understand his fit: “… he led and created the semiconductor design groups responsible for the Application Processors in the first three generations of [Apple] iPhone products.” (Daniel Payne recently shared background on the A4, A5, and A6.)
  • David Parry catches on as VP of Engineering, bringing three decades of experience from Solarflare Communications and SGI. His bio cites extensive experience in cache coherency, and we can infer he also brings in-depth understanding of issues like QoS and high-bandwidth network traffic.
  • Benoit de Lescure is an interesting find, coming over as Director of Applications Engineering, lifted from the arms of Sonics where he held product management and applications engineering roles. He also brings perspective into Europe, with background including time at Thomson.

You can read more about these individuals, and thoughts from Arteris CEO Charlie Janac, in the official press release.

Arteris Recruits World-Class Engineering Leadership Team

The old saying that comes to mind with any organizational press release like this is “the proof is in the pudding”. The big news will be the next-generation Arteris NoC release, and subsequent design wins. However, they have kept their first (public) post-transform commitment, reaching into the likes of Apple and their key competitor for new engineering leadership and ideas. I suspect it won’t be too long before the next steps are seen.



How to meet 3Ps in 3D-ICs with sub-20nm Dies?
by Pawan Fangaria on 03-06-2014 at 1:30 am

It feels good to be at the top of semiconductor technology: dies with high-density designs at sub-20nm technology nodes, stacked together into a 3D-IC to form a complete SoC that can accommodate billions of gates. However, multiple factors must be examined to make that successful amid the often conflicting goals of power, performance, and price. The 3D-IC architecture provides large integration possibilities that can help meet power and performance goals; however, DvD (Dynamic Voltage Drop) hotspots need to be managed well, along with full chip-package level analysis. Further, emerging techniques such as reduced supply voltage (sub-1V), DVFS (Dynamic Voltage and Frequency Scaling), MTCMOS (Multi-Threshold CMOS), and LDOs (Low-DropOut voltage regulators) are being used to meet stringent low-power demands while still targeting performance and reliability goals. A reduced supply voltage (which shrinks noise margins), together with a high density of functionality and high operating speeds, injects significant noise into the system, which drives the need to accurately model power/ground noise in order to determine the right operating speed and voltage levels.

I was impressed with Apache’s RedHawk-3DX solution, which is adept at handling these issues at sub-20nm and provides the right platform for power integrity analysis and sign-off of 3D-ICs that may accommodate billions of gates and operate at clock speeds beyond 3 GHz. To accurately model and simulate power/ground noise, complete extraction of on-chip as well as package and PCB parasitics is performed, with due consideration of TSVs (Through-Silicon Vias), interposers, and micro-bumps. APL (Apache Power Library), on-chip inductance modeling, support for multi-port broadband S-parameter package/PCB netlists, and EM modeling in RedHawk-3DX advance the accuracy levels at sub-20nm nodes.

The accuracy and coverage of dynamic power analysis have been enhanced by the novel idea of ‘event’ and ‘state’ propagation techniques, which utilize both functional stimulus and statistical probabilities to determine the switching scenario in the design. An RTL2Gates methodology is realized by utilizing a fast event propagation engine and RTL VCD at the RTL level, without requiring gate-level vectors. Critical cycles with peak power are identified using an RPM (RTL Power Model), which equips the logic engine to determine the switching state of the complete design for cycle-accurate DvD analysis. A state propagation engine takes toggle activity (at primary IOs, register outputs, etc.) as input and uses a smart ‘detection and pruning’ technique to eliminate the traditional problem of underestimating toggle rates in the logic cone. The predicted toggle rates are used by RedHawk-3DX to perform time-domain VectorLess analysis. RedHawk-3DX also supports a mixed mode, where some blocks use RTL or gate-level VCD as available and the rest remain VectorLess to derive switching activity, enabling designers to perform accurate full-chip dynamic power noise simulation.
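To give a feel for the “statistical probabilities” part, here is a textbook-style sketch of vectorless activity propagation: static signal probabilities flow from primary inputs through a logic cone, and toggle rates are estimated from them. This is a generic illustration of the technique class, not Apache’s engine; the naive independence assumption it makes is exactly the kind of error source that correlation-aware ‘detection and pruning’ exists to handle.

```python
# Vectorless switching-activity propagation, textbook version: propagate
# P(signal = 1) through gates assuming independent inputs, then estimate
# toggles per cycle under temporal independence as 2 * p * (1 - p).

def and_prob(p_a, p_b):
    return p_a * p_b                # P(out=1) for AND, independent inputs

def or_prob(p_a, p_b):
    return p_a + p_b - p_a * p_b    # P(out=1) for OR, independent inputs

def toggle_rate(p_one):
    """Expected toggles per cycle if the signal is 1 with probability
    p_one in each cycle, independently of the previous cycle."""
    return 2.0 * p_one * (1.0 - p_one)

# Tiny cone: y = (a & b) | c, with assumed input probabilities.
p_a, p_b, p_c = 0.5, 0.5, 0.2
p_y = or_prob(and_prob(p_a, p_b), p_c)
print(f"P(y=1) = {p_y:.3f}, estimated toggle rate = {toggle_rate(p_y):.3f}")
# Reconvergent fanout (shared logic feeding both AND inputs, say) breaks
# the independence assumption and skews these estimates in real designs.
```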

Since the PDN (Power Distribution Network) is shared across the chip and the complete package, full-chip capacity along with package/PCB model inclusion is a must for accurate simulation results. RedHawk-3DX has ERV (Extraction Reuse View), a unique hierarchical extraction and modeling technology that delivers full-chip capacity and performance without sacrificing sign-off accuracy. Additionally, techniques such as MPR (Mesh Pattern Recognition) and multi-threading are used to reduce physical memory use and run-time.

LDOs can be found in most SoCs, as they provide a more robust power supply to noise-sensitive parts of the design. However, the model of a chip with an LDO must capture all key operating behaviors of the LDO, including the change in its output supply voltage under different load current scenarios. The expanded low-power simulation capabilities of RedHawk-3DX enable the creation and use of such models in full-chip power noise analysis.
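As a rough illustration of what “output voltage under different load current scenarios” means in a model, here is a toy behavioral LDO. The structure (linear load-regulation droop plus a dropout clamp) is a standard simplification, and every parameter value is hypothetical; RedHawk’s actual models are far more detailed.

```python
# Toy behavioral LDO: output voltage vs. load current, combining a linear
# load-regulation droop with a dropout-limited ceiling. Illustrative only.

def ldo_vout(v_in, i_load, v_set=0.9, r_loadreg=0.05, v_dropout=0.15):
    """Output voltage for input v_in (V) and load i_load (A)."""
    v_reg = v_set - i_load * r_loadreg   # regulation droop (ohms * amps)
    v_max = v_in - v_dropout             # ceiling once the pass device saturates
    return min(v_reg, v_max)

for i_load in (0.0, 0.5, 1.0, 2.0):      # hypothetical load steps, in amps
    print(f"I_load = {i_load:0.1f} A -> Vout = {ldo_vout(1.1, i_load):0.3f} V")
```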


[3D-IC Voltage Drop using Multi-Pane GUI]

Designers can analyze 3D/2.5D designs either in concurrent mode, which simulates full layout details for all dies including the interposer, or in model-based mode, which uses a CPM (Chip Power Model) to capture the electrical signature (current and parasitics) of chips whose layout may not be available. Design architects can explore various design configurations for early prototyping of such complex structures. The multi-tab, multi-pane GUI provides great flexibility in analyzing multi-die designs by simultaneously displaying DvD hotspots and other results for multiple dies in various combinations.

RedHawk Explorer helps designers qualify the input data, review design weaknesses, debug specific hotspots, and provide feedback for design improvements. It provides a concise summary of the various analysis results, and with its powerful cross-probing capabilities designers can easily locate, isolate, understand, and resolve power integrity issues.

It’s interesting to see RedHawk’s comprehensive chip-package-system level analysis environment which can enable designers to do package-aware IC simulation as well as chip-aware package/PCB simulation. A detailed technical paper is available at Apache’s website here.

More Articles by Pawan Fangaria…..



Calypto: the View From the Top
by Paul McLellan on 03-05-2014 at 10:37 pm

At DVCon today I talked to Sanjiv Kaul, the CEO of Calypto. Just as a reminder, Calypto has three products: SLEC (sequential logic equivalence checking, also called sequential formal verification), PowerPro (sequential RTL-level power reduction), and Catapult high-level synthesis (which they took over from Mentor in 2011 in a complicated deal involving stock, people, products, and cash).

2013 was a good year for Calypto with record revenues, including Q4 being the best revenue quarter ever. They ended the year with lots of cash in the bank too, which is great for a startup. In a startup, as Gordon Bell used to say, “cash is more important than your mother.” 2014 is starting out well with lots of customer engagements in high-level synthesis. In the power area, the move to FinFETs makes dynamic power a bigger problem, and since everyone has already done the easy stuff, this drives PowerPro business. SLEC is a sort of complement to both products, basically checking that nothing screwed up. The company is over 100 people and they plan to grow headcount 15-20% in 2014.

I asked Sanjiv if they have access to the Oasys synthesis technology that Mentor acquired at the end of last year. He said they are discussing it. Mentor and Calypto are separate companies (although Mentor owns a controlling interest in Calypto I have heard) so nothing is automatic. But the attraction of using Calypto’s HLS along with Oasys’s very fast RTL synthesis offers the possibility of going straight from C to placed-gates. There is also some possible synergy in the power area. Since Sanjiv used to be on the board of Oasys and did some of their marketing he knows the technology well.

HLS (from Mentor) and PowerPro/SLEC (from the original Calypto) are roughly 50:50 in terms of business. PowerPro is growing the fastest since it is part of an established RTL methodology whereas HLS is a methodology change which always takes time. But all 3 product lines are growing. Sanjiv told me that they will have several announcements this year, mostly in the second half. The only hint I could get from him is that one is something to do with power.

This feels like the year that HLS is going to take off. I guess Cadence feel it too, having just acquired Forte. I happened to be in the press room earlier today and overheard part of a roundtable about HLS. Devadas Varma, now at Xilinx, and coincidentally the founder and initial CEO of Calypto (and who I worked with at Ambit) was one of the participants. He worked at AutoESL and they were acquired by Xilinx and that technology is now part of Xilinx’s Vivado suite (as is Oasys’s synthesis technology that Xilinx licensed). He pointed out that clock rates have peaked at about 3GHz. It is just not possible to go faster due to power. So to get more performance, more parallelism is required. The best tool for handling that is HLS since it can automatically parallelize as much as you want, unrolling loops, duplicating functional units and so on. That is driving use of HLS inside Xilinx.

Today, the sweet spot for HLS is video since it is very complicated, the standards change all the time, and people care more about throughput than the precise number of clock-cycles. Perfect for HLS to make tradeoffs and come up with excellent implementations. More and more of the blocks that differentiate an SoC are using HLS to get that differentiation since it is too hard without. As a result the RTL level designers are starting to adopt HLS. The results that HLS can produce have improved enormously in the last year or two, so if you looked at the technology a few years ago and decided it wasn’t quite ready yet then take another look.

Details on Calypto’s products are here.


More articles by Paul McLellan…


Automating PCB Timing Closure, Saving Up to 67%
by Daniel Payne on 03-05-2014 at 10:10 am

The benefit of using EDA software is that it can automate a manual process, like PCB timing closure, saving you both time and engineering effort. This point was demonstrated today as Cadence added new timing-closure automation to their Allegro product family, calling it Allegro TimingVision. On Tuesday I spoke by phone with Hemant Shah of Cadence to learn more about timing closure of PCB designs.


PCB routing where each color shows different timing margins
Continue reading “Automating PCB Timing Closure, Saving Up to 67%”


450mm Delayed and Other SPIE News
by Scotten Jones on 03-04-2014 at 11:00 pm

Last week I attended the SPIE Advanced Lithography conference. There were a lot of interesting papers and, as is always the case at these conferences, a lot of interesting things to learn from talking to other attendees on the conference floor.

The first interesting piece of information from the conference floor was that 450mm is being pushed out. What I heard is that with low fab utilization and the empty Fab 42 shell, Intel has pulled all of its resources off of 450mm. Intel was one of the key players pushing 450mm, and the comments I heard were that 450mm won’t happen this decade, with 2023 as the new introduction date for high-volume manufacturing. Some equipment companies appear to be putting 450mm equipment development on hold.

I think it is fair to say the conference was generally negative on EUV:

TSMC presented a paper in which they really called out ASML. TSMC showed a chart of roughly a decade of ASML source power roadmaps: each roadmap shows a huge increase in source power in the near future, and yet in reality source power has come up very little. TSMC has also seen low uptime (~70%) on the sources of the systems they have in house. TSMC has an NXE3100 and recently received an NXE3300. On the NXE3300 system they had a laser misalignment in the source, and the CO2 laser that vaporizes the tin to make EUV light damaged other components, requiring extensive repairs. The key issues with EUV at present are source power, reliability, and mask defects; the lack of progress on source power is a huge issue.

SEMATECH presented a paper on what it will take to get to high numerical apertures (NA) for EUV systems. The current NXE3300 is a 0.33 NA system that can produce the 10nm logic node with a single exposure. For the 7nm logic node, further improvements should be able to maintain single exposure, but at 5nm a significant improvement in NA is likely required. In the SEMATECH talk they reported that getting NA above 0.4 requires more mirrors in the optical path. EUV mirrors have non-negligible absorbance of EUV light, reducing throughput. The system may also have to go to higher magnification, with either smaller field sizes or a mask size that grows from the current 6” to 9” or 12”. Increasing mask size is a huge undertaking, requiring retooling throughout the mask supply chain.
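A quick back-of-the-envelope shows why mirror count matters so much: EUV multilayer mirrors reflect only about 65-70% of 13.5nm light, so the fraction of source power reaching the wafer falls geometrically with every mirror added. The reflectivity figure is approximate and the mirror counts below are illustrative, not taken from the paper.

```python
# Optical-path transmission vs. mirror count for EUV: each Mo/Si multilayer
# mirror reflects only ~65-70% at 13.5 nm, so losses compound geometrically.

REFLECTIVITY = 0.67  # approximate per-mirror reflectivity

for n_mirrors in (6, 8, 10, 12):
    transmission = REFLECTIVITY ** n_mirrors
    print(f"{n_mirrors:2d} mirrors -> {transmission * 100:5.2f}% "
          f"of the source EUV reaches the wafer")
```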

There were many other papers on EUV, with shot noise and pellicles being other interesting topics, but I thought these two really made clear the challenges of getting EUV into production and then scaling it further. EUV is already late enough that it will likely miss the 10nm logic node, so 7nm and 5nm performance really becomes key.

Interestingly, there seemed to be a lot of optimism that we can achieve the 10nm, 7nm, and possibly 5nm nodes by combining multi-patterning with other techniques. There were a lot of papers on novel multi-patterning schemes and shrink technologies. New immersion systems discussed by ASML and Nikon promise 250 wafers per hour, reducing exposure costs. There is a lot of work being done on simplifying the schemes. There is also help from the design and process technology side, with increasing use of gridded designs and 3D memory. In 2013 Samsung introduced a 3D NAND technology that achieved bits/cm² density similar to 1x NAND using roughly 50nm lithography. The ability to scale memory in the third dimension is a kind of equivalent scaling that can continue “scaling” with less pressure on exposure systems.
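A rough sanity check on that equivalent-scaling claim, under the simple assumption that areal bit density goes as layers/F² (my arithmetic, not the paper’s): matching a ~16nm-class planar cell with ~50nm lithography takes on the order of ten layers, which is in the neighborhood of what first-generation 3D NAND actually shipped with (24 layers, since real 3D cells carry extra overhead).

```python
# Equivalent scaling in the third dimension: if bits/cm^2 ~ layers / F^2,
# how many layers does 3D NAND at a relaxed feature size need to match
# planar 1x-class (~16nm) density? Back-of-the-envelope only.

def layers_to_match(f_3d_nm, f_planar_nm):
    """Layers needed for 3D NAND at f_3d_nm to match planar f_planar_nm,
    assuming cell footprint scales with F^2 in both cases."""
    return (f_3d_nm / f_planar_nm) ** 2

print(f"~{layers_to_match(50, 16):.0f} layers")   # ~10 layers
```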

The key to multi-patterning combined with other techniques will be cost. Quadruple and even octuple patterning, with up to 5 and 9 cut masks respectively, look like very expensive solutions. My company is the world leader in semiconductor cost modeling, and we are currently doing a lot of work evaluating the economics of the various options. As we get through our work on cost projections I will post some observations.
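As a hint of why those numbers look expensive, here is a toy pass-count comparison built from the mask counts quoted above. The per-pass cost ratio is a hypothetical placeholder (and spacer depositions, etch steps, and yield are ignored); pinning down real numbers is exactly what cost modeling work like this is for.

```python
# Toy litho pass-count / cost comparison. Costs are normalized and
# hypothetical; only lithography passes are counted.

COST_IMMERSION_PASS = 1.0   # normalized
COST_EUV_PASS = 3.0         # hypothetical multiple of an immersion pass

schemes = {
    # name: (litho passes, cost per pass)
    "Quadruple patterning + 5 cut masks": (1 + 5, COST_IMMERSION_PASS),
    "Octuple patterning + 9 cut masks":   (1 + 9, COST_IMMERSION_PASS),
    "EUV single exposure":                (1,     COST_EUV_PASS),
}

for name, (passes, per_pass) in schemes.items():
    print(f"{name:36s} {passes:2d} passes, relative cost {passes * per_pass:4.1f}")
```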

There were also a lot of papers on directed self-assembly, nano-imprint, e-beam, and other alternatives, but generally these techniques strike me as a lot further away from volume manufacturing.

One last personal observation: this was my first time at Advanced Lithography, and I was very surprised to learn you don’t get the proceedings until six weeks or more after the conference. At other conferences I attend you get the proceedings on a memory stick when you arrive. I find being able to read the papers before seeing them presented, and then go back and check them again afterward, key to understanding the content. At paper after paper I was furiously taking notes as the information flew by, and only captured a fraction of it. If anyone from SPIE reads SemiWiki, this is in my opinion a huge issue with your conference.
