
IMEC Technology Forum (ITF) – EUV When, Not If

by Scotten Jones on 05-28-2016 at 7:00 am

For me personally, EUV has been something of a roller-coaster ride over the last several years. I started out a strong believer in EUV, but then at the SPIE Advanced Lithography Conference in 2014 TSMC gave a very negative assessment of EUV, and there was a SEMATECH paper on high-NA EUV that struck me as extremely unlikely to succeed. I also had one of the most knowledgeable lithography experts I know telling me he didn’t think EUV would ever happen.

You can read my blog about the 2014 Advanced Lithography Conference here.

This year at the SPIE Advanced Lithography Conference the general assessment of EUV was much more optimistic. Source power is increasing and after many years of missed targets is following ASML’s projections, pellicles are making progress and photoresists are getting more sensitive and capable. There are several sites around the world exposing thousands of wafers with EUV and nothing drives learning like running large numbers of wafers. Probably the biggest remaining issue is Line Edge Roughness (LER).

I have published several blogs about the 2016 Advanced Lithography Conference here, here, here and here.

Since the Advanced Lithography Conference I have heard that high source power EUV systems are damaging the masks. EUV masks are reflective and they absorb a significant amount of the incident EUV energy and could be subject to thermally induced strains particularly at high power. I should note here that I have asked several experts about this and they have all denied this is an issue.

This week I was at the IMEC Technology Forum (ITF). IMEC is a technology research center located in Belgium and one of the premier semiconductor research centers in the world today. The ITF is a two-day event attended by approximately 1,000 people to showcase the work done by IMEC and their partners.

I have already published one blog about the ITF and I have more blogs in process but I wanted to focus on what I learned about EUV in this blog.

My first blog about the ITF is available here.

In the first talk at the ITF, Luc Van Den Hove, the president and CEO of IMEC, said that we will need a cost-effective lithography solution and EUV is the only one they see. He is convinced it will succeed.

During a panel discussion a question was asked about EUV, and Gary Patton, CTO and Senior Vice President at GlobalFoundries, said we will see it by the end of the decade, although we may not take full advantage of the technology. In the same panel An Steegen, Senior Vice President of process technology at IMEC, said she thought we would see it much sooner, although we may need line-smoothing technologies.

As I have thought about what I heard this week I realized that the EUV we will likely get is very different from the EUV we originally expected.

The history of microlithography is one of periodic shrinks in wavelength as we produce smaller and smaller feature sizes. G-line lithography with a 436nm wavelength gave way to I-line at 365nm at the 800nm node, KrF at 248nm took over at the 350nm node, ArF at 193nm took over at the 90nm node, and ArFi with an effective wavelength of approximately 131nm took over at the 40nm node. At the beginning of the development of EUV the idea was to transition to 13.5nm EUV as the next progression in microlithography and use it for all critical layers.

But then EUV was delayed, and Multi-Patterning (MP) was developed to continue shrinking minimum dimensions until EUV was ready. Except MP just keeps getting better, and it has proven to be particularly good at producing the low LER values critical to Front End Of Line (FEOL) transistor fabrication. So what does this mean for EUV? I think the bottom line is that the expectation of single-exposure EUV for all critical layers is no longer going to happen, and what we are likely to get is a hybrid MP/EUV solution.

In an interview with Gary Patton that I will blog about later he said he thought EUV would be used for contact and via first and then metal layers.

If you look at contacts and vias, MP at 14nm is done with Litho-Etch Litho-Etch (LE2), then at 10nm with Litho-Etch Litho-Etch Litho-Etch (LE3), with the prospect of LE4 at 7nm. EUV is generally believed to be intermediate in cost between LE2 and LE3, so for 10nm and 7nm processes a single EUV exposure for contacts and vias would be cost effective. Contacts and vias are also not very sensitive to LER, which is the biggest weakness of EUV.
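The cost argument above can be sketched in a few lines. The relative cost numbers below are hypothetical placeholders (no published figures are implied); the point is only that a single EUV exposure priced between LE2 and LE3 wins once multi-patterning needs three or more litho-etch passes:

```python
# Hypothetical relative cost of one 193i litho-etch pass (normalized to 1.0).
LITHO_ETCH_PASS = 1.0
# Assumed: a single EUV exposure costs somewhere between LE2 and LE3.
EUV_SINGLE_EXPOSURE = 2.5

def mp_cost(passes: int) -> float:
    """Cost of an LEn multi-patterning scheme: n litho-etch passes."""
    return passes * LITHO_ETCH_PASS

for node, passes in [("14nm", 2), ("10nm", 3), ("7nm", 4)]:
    le = mp_cost(passes)
    cheaper = "EUV" if EUV_SINGLE_EXPOSURE < le else "MP"
    print(f"{node}: LE{passes} = {le:.1f}, EUV = {EUV_SINGLE_EXPOSURE:.1f} -> {cheaper} cheaper")
```

With these assumed ratios, MP stays cheaper at 14nm (LE2) while EUV wins at 10nm (LE3) and 7nm (LE4), which is exactly the crossover the text describes.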

After contacts and vias, critical metal layers are next. In the Front End Of Line at very small feature sizes, Self Aligned Quadruple Patterning (SAQP) is used with cut masks. SAQP creates lines and spaces with a pitch down to 20nm and then cut masks cut the line ends. LER is excellent, but for logic devices 4 to 5 cut masks may be required at the 7nm node. For critical metal layers in the Back End Of Line the situation is even more complex. In BEOL applications block masks are used. In the FEOL with multiple cut masks you put on a cut mask, etch it into the hard mask, strip the cut mask, print a new cut mask and etch it into the hard mask, over and over until you get the pattern you want in the hard mask. You then etch the hard mask pattern into the underlying film. In the BEOL you are creating trenches and you need block masks to block etching into the hard mask. The problem is that when you need multiple block masks, the etch following the first block mask removes the hard mask from places where you want to apply additional block masks. The workaround for this adds a lot of process complexity. With EUV you can do SAQP with a single EUV block mask and cut out a lot of process complexity. Since all you are doing is terminating the ends of the trenches, you once again have an application that is less sensitive to LER.

Finally, EUV could be used in the FEOL to replace multiple cut masks and once again if it is only cutting the lines, LER is less of an issue.

There is another interesting aspect to EUV I hadn’t really thought about much before this week, and that is the effect it has on cycle time. If you replace 3 or 4 cut or block masks, or 2 or 3 LE3/LE4 masks, and the associated processing with a single exposure, you save a lot of cycle time. During my interview with Gary Patton he mentioned that GlobalFoundries is using EUV in place of MP for non-transistor layers during their 7nm development to save time!
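The cycle-time saving is simple arithmetic. The days-per-mask-pass figure below is an assumed placeholder, not a real fab number; the structure of the saving is what matters, since each mask pass carries its own litho, etch, strip and metrology steps:

```python
# Hypothetical fab cycle time consumed per mask pass (litho + etch + strip +
# metrology), in days. Any real figure would be fab- and layer-specific.
DAYS_PER_MASK_PASS = 1.5

def cycle_time_saved(mp_passes: int, euv_passes: int = 1) -> float:
    """Days saved by replacing an n-pass MP scheme with one EUV exposure."""
    return (mp_passes - euv_passes) * DAYS_PER_MASK_PASS

print(cycle_time_saved(4))  # LE4 replaced by a single EUV exposure
print(cycle_time_saved(3))  # LE3 replaced by a single EUV exposure
```

Multiplied across every contact, via and cut/block layer on a route with dozens of masks, even a modest per-pass figure adds up to weeks of cycle time.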

The sense I have now is that EUV will likely be implemented on a second-generation 7nm technology and then see expanded use at 5nm: first at contacts/vias, then for critical metal layer block masking, and eventually it may be used for FEOL cut masking. For everything but contacts/vias it will be paired with SAQP to do blocks and cuts.

There is also the prospect that LER improvement for EUV could expand the application space. There is a belief that as you lower the EUV exposure dose, shot noise creates LER, and that there is therefore a fundamental trade-off between dose and LER. I asked An Steegen about this and she said it is more complex than that, and that there is a lot of research being done on understanding the fundamental causes of LER. It is also possible to implement post-lithography line-smoothing techniques. Simply put, etching tends to remove peaks from roughness and deposition tends to fill in valleys, so there is the potential to develop a combined etch/deposition technique that smooths LER.
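A toy 1-D model makes the etch/deposition intuition concrete. Here the edge is a list of random deviations from the ideal line, the "etch" pulls peaks down and the "deposition" fills valleys (modeled together as a simple clip), and the roughness is measured as an rms value. All numbers are arbitrary illustration values, not process data:

```python
import random

random.seed(0)
# Edge deviation from the ideal line position at 200 points, in arbitrary nm.
edge = [random.gauss(0.0, 1.0) for _ in range(200)]

def rms(profile):
    """Root-mean-square roughness of an edge profile."""
    return (sum(x * x for x in profile) / len(profile)) ** 0.5

def smooth(profile, clip=0.5):
    """Crude etch/deposition model: etch clips peaks above +clip,
    deposition fills valleys below -clip."""
    return [min(max(x, -clip), clip) for x in profile]

before = rms(edge)
after = rms(smooth(edge))
print(f"LER (rms) before: {before:.2f}, after etch/dep smoothing: {after:.2f}")
```

The clipped profile always has a lower rms, which is the whole point of a post-litho smoothing step: it relaxes the dose/LER trade-off without changing the exposure itself.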

The bottom line is that some of the leading process experts in the world believe EUV is coming before 2020; it just won’t be the single-exposure, all-critical-layer EUV we were expecting when EUV development started, and MP is likely here to stay.

Also Read: IMEC Technology Forum (ITF) – IC Innovation


Moving chips from industrial to industrial IoT

by Don Dingee on 05-27-2016 at 4:00 pm

IHS has put out its 1Q2016 Application Market Forecast predicting the highest growth rate segments for semiconductors over the next five years – and what was once old is new yet again. There it is, in the top right corner: industrial, projected to outpace even the automotive sector. Continue reading “Moving chips from industrial to industrial IoT”


How TSMC Tackles Variation at Advanced Nodes

by Pawan Fangaria on 05-27-2016 at 12:00 pm

The design community is always hungry for high-performance, low-power, and low-cost devices. FinFET and FDSOI technologies have emerged at advanced process nodes to deliver high performance and low power at smaller die sizes. However, these advanced process nodes are prone to new sources of variation. Moreover, cutting-edge designs targeting the best PPA (Power, Performance, and Area) leave very little design margin.

In such a situation, with high variation and low design margin, designers have to do more variation analysis, which impacts the design schedule. To achieve successful design closure on time, the variation analysis tool must be robust, providing high performance, accuracy, and coverage.

In 2015, at the 52nd DAC, Cypress, Applied Micro Circuits, and Microsemi presented success stories about dealing with variation in their designs. They used Solido’s Variation Designer, which scales over a large number of process variables and prioritizes simulations for the most-likely-to-fail cases. I blogged about this at the time; the link is provided below, and the blog also contains links to their video presentations.


Over the past year, Variation Designer has been further improved in verification speed, accuracy, and coverage for leading-edge high-performance, low-power, and low-voltage designs. The platform also has a very efficient variation debugging environment. This next generation is ‘Variation Designer 4’ from Solido.

Solido will be presenting its new developments at this year’s 53rd DAC as well, but before that I wanted to highlight how TSMC and Solido are collaborating to realize variation-aware designs at advanced process nodes.

TSMC and Solido are jointly offering the following free webinar:

TSMC and Solido Collaborate for Variation-Aware Design of Memory and Standard Cell at Advanced Process Nodes

Abstract:
Variation effects have an increasing impact on advanced process nodes, and at each node, new sources of variation must be considered. Furthermore, increased competition is forcing tighter design margins to make high-performance, low-power, low-cost products. Designers must now do more variation analysis than ever to achieve these tighter margins, using advanced variation-aware technology for speed, accuracy and coverage to deliver competitive chips on schedule. This webinar will discuss how TSMC and Solido collaborate to offer variation-aware design techniques for memory and standard cells on TSMC advanced processes using Solido’s new Variation Designer 4.


Speakers: Jacob Ou, Technical Marketing Manager at TSMC (on left) and Kristopher Breen, VP Customer Applications at Solido

Date: June 1, 2016
Time: 10am Pacific
Duration: 55 minutes

Click here to register!

Also read: Moving with Purpose for Certainty


More Articles from Pawan


Google, Deep Reasoning and Natural Language Understanding

by Bernard Murphy on 05-27-2016 at 7:00 am

Understanding natural language is considered a hard problem in artificial intelligence. You could be forgiven for thinking this can’t be right – surely language recognition systems already have this problem mostly solved? If so, you might be confusing recognition with understanding – loosely, recognition is the phonology (for voice) and syntax part of the problem and understanding is the semantic part.

A lot of progress has been made in recognition and this is largely thanks to deep reasoning. Voice recognition is a natural for these methods – systems can be trained to recognize a voice or a range of voices then can, thanks to probabilistic weighting, recognize a pre-determined vocabulary with high accuracy. The same applies to text recognition trained for reading selected content (stories, web-content, etc).

The quality of recognition depends on a few things – a relevant vocabulary, a sufficient grammar and a method to resolve the ambiguities which are typical in natural language. A typical English speaker has a vocabulary of ~20k words – very manageable with a large-enough neural net, though most applications today work with a much smaller task-specific vocabulary (for example in voice commands for your car). Grammars, on the other hand, tend to be quite simple in most applications. They throw away most of what they see and look for a likely verb and object (assuming you are the subject) to decide what you want. There are much more capable systems like IBM’s Watson, but these have required massive investment to get to better recognition.
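The "throw most of it away, find a verb and an object" grammar described above can be sketched in a few lines. The vocabulary and command set here are invented for illustration, in the spirit of a task-specific voice-command grammar rather than any real product's:

```python
# Hypothetical task-specific vocabulary for a voice-command application.
VERBS = {"call", "play", "navigate", "open"}
OBJECTS = {"home", "work", "music", "garage"}

def parse_command(utterance: str):
    """Keyword-spotting parser: discard everything except the first
    recognized verb and the first recognized object."""
    verb = obj = None
    for word in utterance.lower().split():
        if word in VERBS and verb is None:
            verb = word
        elif word in OBJECTS and obj is None:
            obj = word
    return (verb, obj) if verb and obj else None

print(parse_command("hey could you please navigate me to work now"))
```

Every filler word ("hey", "could", "please", "me", "to", "now") is simply discarded, which is why such grammars are cheap but brittle compared with full parsing.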

But now there’s a big assist to building equally capable systems, and that helps with the ambiguity problem. Google recently released SyntaxNet (which runs on top of TensorFlow) as an open-source syntax engine to recognize syntax structures in a text sentence. The release also includes an English-language parser called Parsey McParseface, which identifies the syntax tree for a sentence, including relative clauses, and tags parts of speech like nouns, verbs (including tense and mode), pronouns and more.

While the system works with text, it is also built on deep reasoning to handle ambiguity in sentence structure. An example given in the link below considers “Alice drove down the street in her car”. Sounds pretty simple to us, but a possible machine interpretation is that she drove down a street which is inside her car. Trained neural net processing helps resolve these ambiguities.
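One way to picture the resolution step: the parser scores each candidate attachment and keeps the highest. The scores below are invented for illustration; in a real system like SyntaxNet they come from a trained neural network, not a hand-written table:

```python
# Two candidate attachments for "in her car" in
# "Alice drove down the street in her car", with hypothetical model scores.
candidates = {
    "attach to 'drove' (Alice was in her car while driving)": 0.94,
    "attach to 'street' (the street is inside her car)": 0.06,
}

# The parser simply keeps the highest-scoring parse tree.
best = max(candidates, key=candidates.get)
print(best)
```

Training is what makes the sensible attachment score 0.94 rather than 0.5: the net has seen far more sentences where people drive in cars than where streets sit inside them.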

Based on training with carefully-labelled Washington Post newswire texts, the parser is able to come very close to human accuracy in structuring sentences. It doesn’t do quite as well with unlabeled text, especially web examples, showing there is still more research required in self-guided training.

Google’s goal in this release is to encourage wider research on the deeper problems in natural language understanding, for example completing parts-of-speech identification (identifying that this is the subject, not just a noun or pronoun) and the semantics. SyntaxNet helps other researchers and commercial developers avoid needing to reinvent a solution to a solved problem (and presumably they can now be confident that Google will be sympathetic to fair-use claims for products based on this software).

A lot of the interesting semantic challenges revolve around ambiguity and context-awareness: “Everyone loves someone” (one fortunate person is loved by everyone or possibly many people are loved?) and “John kissed his wife and so did Tom” (Tom kissed John’s wife or his own wife?). These problems might also be amenable to deep reasoning (what is the most probable interpretation) but it’s not yet as clear how you would constrain training examples for specific applications.

Natural language processing is becoming a competitive frontier as personal assistant software and translation tools become more popular and as our expectations for accuracy in dictation continue to rise (who wouldn’t love to get rid of keyboards?). This is a domain worth watching. You can read more about the Google release HERE. And HERE is a Berkeley paper on training neural nets to recognize continuous speech with a 65k word lexicon.

More articles by Bernard…


How Microsoft Could Become A Mobile Player

by Patrick Moorhead on 05-26-2016 at 4:00 pm

Microsoft BUILD is the company’s annual developer conference where they communicate their latest strategies and deliverables to developers and launch many new innovations. BUILD is extremely developer-focused and is intended to inform current Microsoft developers as well as recruit more developers to develop for Microsoft platforms.
Continue reading “How Microsoft Could Become A Mobile Player”


How AT&T Will Turn on Car 2 Car Connectivity

by Roger C. Lanctot on 05-26-2016 at 12:00 pm

Cnet reports that, starting this week, AT&T is offering “Unlimited Plan” customers the option to add connected cars or a ZTE Mobley Wi-Fi plug-in device to their plan for $40/month for unlimited data – $10/month will buy 1GB. The plan applies to certain Buick, Audi, Chevrolet, Cadillac, GMC, Jaguar, Land Rover and Volvo vehicles equipped with AT&T telecom modules by their manufacturers.

Cnet goes on to note that AT&T stated in its first quarter earnings report that it had more than 8M cars on its network (in the U.S., more globally) and has connected more than 50% of all new connected passenger vehicles in the U.S. AT&T’s dominance of factory-fit or “embedded” connectivity in the U.S. puts the company in the unique position of being able to enable multiple connectivity options for drivers potentially integrating their smartphones, the embedded devices in their cars and/or aftermarket devices – in this case from ZTE and Audiovox.

The current offer is focused on delivering Wi-Fi for families to enable Internet access for children with mobile devices. Business use of Wi-Fi probably makes some sense as well. The Audiovox offering is skewed toward usage-based insurance and vehicle servicing.

With the new offer, AT&T is opening a window to possibilities that might boggle the minds of its car making customers and their car driving customers. Jaguar and Volvo, in particular, are interesting connected car partners for AT&T because both have been working on solutions to share data between cars. GM (Chevrolet, Cadillac, Buick, GMC) and Audi have been slow to embrace the emerging vehicle-to-vehicle communication opportunity but not these two European car makers.

In Volvo’s case, the company has been experimenting with sharing “black ice” notifications from one Volvo car to another via cloud connectivity over the embedded modem. In essence, Volvo cars will share their ABS, ESC or other sensor data when the car encounters an icy patch and notify following drivers via the LTE connection.


SOURCE: Volvo

Similarly, Jaguar is testing an application to notify following vehicles (presumably other similarly connected Jaguars) of potholes or other hazards on the road ahead. As in the Volvo case, Jaguar intends to use cloud connectivity via the LTE connection.


SOURCE: Jaguar Land Rover

What may be most attractive about these inter-vehicle communications is that they may also be shared with local traffic information authorities. But this isn’t just about road hazards: LTE connections are increasingly being used to communicate the signal phase and timing of traffic lights, variable message sign information and dynamic tolling data to enhance navigation guidance.

Given its unique role in the market, AT&T is in a privileged position to define a new path forward in vehicle connectivity combining smartphone applications with on-board data to enhance the driving experience. Mobile devices will serve as proxies for embedded connectivity where car companies are dragging their feet in embracing this new connectivity opportunity.

Notably absent from the new AT&T offer are AT&T customers Tesla, BMW, Nissan, Infiniti and Volkswagen. BMW’s absence is especially surprising given the company’s embrace and advocacy of wireless cellular connectivity as an inter-vehicle and vehicle-to-infrastructure communication medium. AT&T will also soon be adding Chrysler, Mitsubishi, Ford, Porsche, Honda, Acura, and Subaru to its client roster in the U.S.

Like Vodafone in Europe, which dominates the market for embedded connections, AT&T has an unprecedented opportunity to lead the automotive industry forward in redefining vehicle connectivity to include communications between cars – including, maybe, cars from different manufacturers. It’s a tantalizing prospect and one that has not gone unnoticed by Google and Apple. (As an industry, we’re not really going to leave inter-vehicle communications to those guys, are we?)

It doesn’t hurt that AT&T has partnered with Ericsson to facilitate these cloud-based connections. The opportunities only become more fascinating as we prepare for 5G connectivity with its low latency and ability to convert every 5G-equipped vehicle into a hub. Sign me up.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/ac…e#.VuGdXfkrKUk


Banks and Retailers need to win in IOT

by Sudeep Kanjilal on 05-26-2016 at 7:00 am

In the runtime for the current mobile ecosystem – apps:

  • The average user has 21 apps on her smartphone, out of the 1.5M total apps on the app store
  • While apps account for more than half the time users spend on digital/smart platforms, the average user spends more than 40% of that time on a single app
  • Two-thirds of all smartphone users did not download any new app last month
  • The 25 most-used apps did not feature a single bank or retailer app

The mainstream Fortune 500 banks and retailers have totally lost the mobile ecosystem race. The key question for their CEOs and boards is – do they want to lose the next one (IoT) too?


The story of run-time…

All computing ecosystems are defined by their runtime, or the user interface. Sure, we should, and will, take the ‘stack’ view: it’s the overall system that provides the full range of functionality and capability to the computing ecosystem. However, form inevitably follows function – and nothing defines the function of a computing ecosystem like its runtime environment. That is the primary ‘constraint’.

We all know the standard chronology – the Mainframe ecosystem, followed by the PC ecosystem, followed by the Web ecosystem, followed by the current mobile ecosystem. The runtime also evolved, and it shaped the ‘applications’ and the resulting capabilities of each system. What is often hidden from this view is that the productivity benefits are also defined (or constrained) by this runtime as well.

The next runtime, for the emerging new computing ecosystem – Internet Of Things – is also coming into view. And it will be BOTs!

BOTs are coming!

Why do we need a new runtime? Why can’t the runtime of our current mobile ecosystem – based on APIs – serve us for the next computing ecosystem?

The answer to that lies in the definition of IOT – A Network Infrastructure That Enables The Interaction Among Physical And Digital Capabilities

By definition, it is impossible for us to interact with 10 or 100 connected ‘things’ – digital and physical – using a user-initiated runtime. The only way to effectively do so will be a combination of Augmented Reality and an AI-driven ‘command line’ – i.e., a bot.

Now, tech leaders are already investing billions (yes, billions) into this. Remember, Facebook bought Oculus for $2B to own the new runtime/UI. It already uses AI techniques to identify people in photos and to decide which status updates and ads to show to each user. Facebook is also pushing into AI-powered digital assistants and chatbot programs which interact with users via messages. And last week it opened up its Messenger service to broaden the range of chatbots.

Google is using AI techniques to improve its internet services and guide self-driving cars, and other industry giants are also investing heavily in AI-driven bots. Microsoft also recently launched its own chatbots. And despite the embarrassment Microsoft faced when its chatbot turned racist, the episode (ironically) demonstrated how powerful the deep-learning algorithm behind the chatbot really was, as it accurately picked up the filth (sadly abundant) on the internet.

The point of collating all this public news here is to drive home a central fact – the new computing platform is coming, fast, and the top tech leaders are working on the new runtime. So, will history repeat itself (as with the smartphone-based ecosystem)?

How should the current leaders – Banks and Fortune 500 Retailers – position themselves?
Despite seriously lagging behind the tech giants/innovators in defining the AI standards, the banks and Fortune 500 retailers have an ace up their sleeves. It’s called Context.

Put simply – if Context will be the king of bots, then Data is the king maker!
And who has the best data on consumers?

This race is for the banks and retailers to lose.

Also see: Designing Low-Power IoT Systems


ARM tests out TSMC 10FinFET – with two cores

by Don Dingee on 05-25-2016 at 4:00 pm

About 13 months ago, the leak blogs posted news of “Artemis” on an alleged ARM roadmap slide, supposedly a new 16FF ARM core positioned as the presumptive successor to the Cortex-A57. Now, we’re finding out what “Artemis” may actually be, inside a multi-core PPA test chip on TSMC 10FinFET. Continue reading “ARM tests out TSMC 10FinFET – with two cores”


Who protects power protection chips?

by Tom Simon on 05-25-2016 at 12:00 pm

Power protection chips are widely used these days to protect sensitive circuitry from over-voltage and over-current stress. However, these workhorse chips are often subjected to extraordinary thermal stress themselves and need to be protected from burning up – literally.

Power protection chips work like electronic fuses, switching off supply lines when the voltage or current passing through them becomes too high. They have several advantages over other types of fuses, though. First off, they can be reset automatically or under the control of external logic. Additionally, they can limit current or voltage to allow a sensitive device to continue to operate normally even if there are spikes in the supply line.
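The behavior described above – clamp spikes, trip on a hard fault, then reset – can be sketched as a toy state machine. The current thresholds are arbitrary illustration values, not datasheet figures for any real e-fuse:

```python
class EFuse:
    """Toy electronic-fuse model: current limiting, trip, and reset."""

    def __init__(self, limit_a: float = 2.0, trip_a: float = 3.0):
        self.limit_a = limit_a   # above this, current is clamped
        self.trip_a = trip_a     # at or above this, the supply line is cut
        self.tripped = False

    def pass_current(self, demand_a: float) -> float:
        """Return the current actually delivered to the load."""
        if self.tripped:
            return 0.0
        if demand_a >= self.trip_a:
            self.tripped = True  # hard fault: switch off the supply line
            return 0.0
        return min(demand_a, self.limit_a)  # clamp spikes, keep load running

    def reset(self):
        # Unlike a one-shot fuse, an e-fuse can be reset by external logic.
        self.tripped = False

fuse = EFuse()
print([fuse.pass_current(i) for i in (1.0, 2.5, 3.5, 1.0)])  # [1.0, 2.0, 0.0, 0.0]
fuse.reset()
print(fuse.pass_current(1.0))  # 1.0
```

The middle two samples show the two advantages the text lists: a 2.5A spike is limited to 2.0A so the load keeps running, while a 3.5A fault trips the device until external logic resets it.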

ON Semiconductor’s electronic fuses are used heavily in hot-swap devices like USB ports and SAS and SATA disk drives. Upon insertion there is usually a power surge that can damage controller circuitry. The ON Semi devices provide soft start, so devices see their power supply presented in a manageable way.

Looking at the block diagram for an electronic fuse, we see that there is circuitry for over-temperature and over-current protection. Both of these need sensors to provide the raw input for their operation. While it is obvious that the thermal sensor needs to collect accurate thermal information, it is also the case that the current sensor, which is a replica device, needs to have the same operating conditions as the large power transistor that controls the output.

The problem with large power transistors is that there are significant temperature gradients across their surface. The temperature is determined by joule heating and the device’s ability to dissipate that heat. However, the junction temperature in turn determines the electrical operation of the segments of the transistor. In other words, there is an interdependency between localized temperature and operating voltage and current.
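A minimal lumped sketch of this interdependency: dissipated power heats the junction, the hotter junction raises the on-resistance, and the higher resistance changes the dissipation again, so the two must be solved together. All coefficients here are hypothetical; tools like PTM-ET solve the distributed 3-D version of this coupling across the whole transistor:

```python
T_AMB = 25.0   # ambient temperature, degC
R_TH = 2.0     # assumed thermal resistance junction-to-ambient, degC/W
R0 = 0.010     # assumed Rdson at 25 degC, ohms
TC = 0.004     # assumed temperature coefficient of resistance, per degC
I_LOAD = 10.0  # load current, A

def rdson(t: float) -> float:
    """On-resistance rises roughly linearly with junction temperature."""
    return R0 * (1.0 + TC * (t - 25.0))

# Fixed-point iteration: T -> T_amb + R_th * I^2 * R(T)
t = T_AMB
for _ in range(50):
    t_new = T_AMB + R_TH * I_LOAD**2 * rdson(t)
    if abs(t_new - t) < 1e-9:
        break
    t = t_new

print(f"converged junction temperature: {t:.2f} degC")
```

Even this zero-dimensional version needs iteration to converge; in a real device the same feedback plays out point-by-point across the die, which is why the hotspot location is hard to predict by inspection.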

As a result, the highest temperature in a power transistor can be hard to locate and predict without the right tools. The penalty for placing either sensor in the wrong location can be incorrect over-current protection, or even worse, complete and dramatic device thermal failure. The highest power dissipation occurs at the highest current and voltage operation. This is where the risk of device failure is most extreme.

On Semi has chosen to rely on the “Power Transistor Modeler – Electro-Thermal” (PTM-ET) tool from Magwel to pinpoint the hottest location within their power transistors.

PTM-ET reads in the layout and also a model for the junction device used in the power transistor. PTM-ET extracts the metal and poly structures to be able to fully model the current flow in the device. Then, user defined stimulus is applied over a time interval. PTM-ET uses advanced modeling methods to concurrently solve for transient electrical and thermal behavior. PTM-ET can also use surrounding thermal source and sink information to include an accurate view of the device including substrate, bond-wire, package and even board thermodynamics.

The end result is a 3-D visualization of the transient thermal and electrical performance of the entire device that can be graphically viewed or output in tabular report format. This can be used to determine the location and temperature of the device hotspot. With this information the designer can accurately place the over-temperature and replica devices in the optimal location.

There is a complicating twist, however: once the device layout is altered to place the sense devices, the active area changes, resulting in movement of the hotspot. PTM-ET can be used again after sense-device placement to ensure that they are placed as close as possible to the resultant hot spot(s).

PTM-ET is part of the suite of power transistor modeling tools from Magwel. The base product, PTM, is used to predict steady state Rdson and current density on the source/drain metal-poly network of large power devices. Also, PTM-TR can accurately predict non-uniform switching by fully modeling and simulating transient electrical behavior using the device metal-poly gate network and the active area device model.

Companies like ON Semiconductor rely on Magwel tools to solve challenging design problems so they can deliver high-quality and high-performance semiconductors. For more information about Magwel’s complete line of solutions for power transistor design, on-chip ESD simulation and power/ground network analysis tools, visit their website at www.magwel.com.


IMEC Technology Forum (ITF) – IC Innovation

by Scotten Jones on 05-25-2016 at 7:00 am

IMEC is a technology research center located in Belgium that is one of the premier semiconductor research centers in the world today. The IMEC Technology Forum (ITF) is a two-day event attended by approximately 1,000 people to showcase the work done by IMEC and their partners.

Luc Van Den Hove is the president and CEO of IMEC and he kicked off the ITF with a talk entitled “IC Innovation The Heartbeat of Yesterday, Today, Tomorrow.” His talk gave a really interesting overview of the challenges and opportunities the semiconductor industry faces today.

We are now in the middle of the second decade of the century and it is a decade of disruption. Today Uber is the largest taxi company in the world but it doesn’t own any taxis, Facebook is the largest provider of content but it doesn’t produce any content, and Airbnb is the largest housing provider without owning any housing.

We are now on the eve of the Internet of Things (IoT) and IoT will disrupt everything. There will be billions of sensors providing data to tailor the environment.

In Integrated Circuit technology, scaling has historically given us smaller, faster and cheaper with less power consumption. The trade-off today is that scaling no longer provides all of the historical benefits. He is convinced that scaling will continue for a couple more decades, but Moore’s law will be different; it won’t just be dimensional scaling.

On the device technology front, FinFETs will transition to horizontal nanowires and then to vertical nanowires.

A cost-effective lithography solution is needed and IMEC is convinced EUV is the only option; he is convinced it will succeed.

2D scaling will get harder and the time from node to node will get longer, so we will need to make more use of the third dimension. For example, if you build a 3D SRAM cell, once you have that building block you may be able to stack them up. SRAMs are very regular, but then so are FPGAs, and you can also build up standard cells.

Another opportunity is to do heterogeneous chip stacking so that each chip can be optimized for its portion of the workload. Combining Through Silicon Vias (TSV) and interposers, you can bring processing, memory and optical Input/Output (I/O) together.

Magnetic spin-based circuitry can achieve integration with fewer components than CMOS.

System innovation is also needed. To date everything has been based on Von Neumann computing, but we can evolve to neuromorphic computing that is more like the human brain, using fuzzy logic. Each neuron in the human brain is connected to 10,000 to 15,000 other neurons. The idea is to mimic the brain’s interconnection scheme in hardware; RRAM has synapse-like behavior.

Quantum computing is a long-term option, but he believes it is still several decades away. He is convinced a semiconductor platform will be needed to make it practical.

Systems and technology will both need to be co-optimized.

IC technology will enable precision medicine. Today medicine is generic, but in the future it will be tailored to the individual through genetic profiling. A DNA sequence has 6 billion characters. Targeting tumors based on DNA will allow more effective treatment, but sequencing needs to be faster for it to be economical. They can sequence DNA for a few thousand dollars today. The short reads of the sequence require billions of reads that have to be reconstructed. They can now reconstruct a DNA sequence in 2 hours instead of the 5 days it previously took. Sequencing is getting better faster than Moore’s law.

Automotive is evolving to smart connected cars for safety, inclusiveness and sustainability. Making this happen requires better sensors that are smarter and less expensive. They are integrating LIDAR onto silicon for an order-of-magnitude reduction in cost. 70 GHz going to 140 GHz enables antennas on chip and better detection and identification. They are working on a full 360-degree image, but it needs to combine a lot of data. This will require a lot of on-board processing power, and automotive will need access to the latest technology. Cars will be the first implementation of robots in our lives, which will then spill over into daily services, health care and autonomous delivery.

We will need innovation in hardware and software: smarter sensors with sensing, processing, storage and wireless communications all integrated together (computation is cheaper than bandwidth, so process first to minimize transmission of data).

IoT will become the super brain of the world. We will need distributed intelligence to handle the data. Security and privacy will need to be implemented at the hardware level, and all of this will need to be tested in actual use.

In order to make this all happen, all of the key players will need to work together. IMEC’s core is silicon technology, but they are spreading out and leveraging silicon. Technology at the core needs to be surrounded by a system to deliver smart health, mobility, cities, manufacturing, energy, media, government, etc. To this end they are merging iMinds into IMEC, bringing iMinds’ application knowledge under the IMEC umbrella to accelerate IoT.

He believes the chance that the next innovations will be built on semiconductor technology is pretty high, just as it has been for the last fifty years. Bringing about the next wave of innovation will require specific technologies optimized for the application; hardware and applications, and systems and technology, will all have to be co-optimized. IMEC is bringing together top application partners, fabless companies, semiconductor companies and major suppliers to accomplish this. They are fully committed to developing the next wave of building blocks.