The emerging theme of fit-for-purpose IoT parts gained yet another perspective, this time with ARM and CEVA chiming in on a low-power Wi-Fi approach outlined in a new webinar. It was a rather unique event with an abbreviated 25-minute presentation and an extended 35-minute Q&A that added a lot of insight. Continue reading “Self-contained low power Wi-Fi IP for IoT apps”
Wearables Mean Continuous Growth in the Internet of Things Ecosystem
The Internet of Things encompasses a wide range of connected services, technologies, and hardware devices. Yet, for consumers, it is the growing number of portable and wearable devices that will be their main interface with IoT tech. The wearable device market is rapidly evolving, especially when it comes to smartwatches and fitness monitoring devices.
Continue reading “Wearables Mean Continuous Growth in the Internet of Things Ecosystem”
“Thinking Outside the Chip”
While pushing the boundaries of Moore's Law in the world of 2D packaging, companies are starting to explore nontraditional approaches to designing integrated circuit chips. 2D packaging is currently the industry's most widely used method of designing chips, and while it leads in power efficiency and performance, it falls short in its use of space, which is always a concern in the chip industry. 2.5D and 3D packaging capitalize on the vertical dimension, increasing the number of transistors a chip can hold per unit area without increasing the chip's footprint. With this advantage, 2.5D and 3D packaging have the potential to leapfrog 2D packaging in the future.
Traditional 2D packaging predominantly refers to the System-on-Chip (SoC) and the System-in-Package (SiP). An SoC is a package holding a single die that integrates multiple functions. Because it has only one die, an SoC demands low power and offers fast circuit operation; however, it is considered very difficult to design and alter. The whole SoC has to be replaced to add a new function, and a complete replacement is needed if a function does not work correctly. A SiP contains a substrate connected to multiple individual dice by flip-chip bumps, which are essentially solder bumps; the individual dice sit side by side on the same substrate. Unlike SoCs, SiPs are flexible to alter, since they contain separate dice for different functions. A SiP is slower than an SoC because of the additional connections between the individual dice, which also increase the chances of failure.
2.5D packaging is very similar to 2D packaging, except that it inserts a silicon interposer between the dice and the substrate. The silicon interposer is a piece of silicon with metal routing on both faces, and it uses through-silicon vias (TSVs) as vertical tunnels connecting those metallized faces. The dice are connected to the interposer with micro-bumps, which are much smaller than flip-chip bumps, while the interposer is connected to the substrate with flip-chip bumps. Since TSVs provide direct connections, 2.5D designs use less power for communication between components, and the interposer reduces the space needed for routing. On the other hand, adding the interposer introduces additional cost and makes design and testing more difficult.
3D packaging stacks multiple dice on top of each other, using TSVs to connect the individual dice to each other and to the substrate. Because TSVs are so thin, 3D packaging makes very efficient use of space, increasing the number of transistors a chip can hold per unit area compared to 2D packaging. TSVs also enable efficient communication between the dice and better power performance, since less power is needed to transmit signals. Heat dissipation, however, is one of the major issues with 3D packaging: with the dice stacked on top of each other, high temperatures can damage them. A further problem is the cost of testing, since all current chip-test mechanisms were built for 2D parts. These additional costs will push up the price of chips, a divergence from the long trend of cost reduction in the chip industry.
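As a rough illustration of the density argument, here is a back-of-the-envelope sketch. The die figures are made-up round numbers for illustration, not data from any real process:

```python
# Illustrative comparison of effective transistor density for a single
# 2D die vs. a 3D stack occupying the same package footprint.

def effective_density(transistors_per_die: float, die_area_mm2: float,
                      stacked_dice: int = 1) -> float:
    """Transistors per mm^2 of package footprint."""
    return transistors_per_die * stacked_dice / die_area_mm2

flat = effective_density(1e9, 100)        # one die of 100 mm^2
stacked = effective_density(1e9, 100, 4)  # four dice in the same footprint

print(stacked / flat)  # 4.0 -- density scales with the number of stacked dice
```

The point of the sketch is simply that capacity grows with the layer count while the lithographically defined footprint stays fixed.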
Even though 2D packaging is the mainstream design in manufacturing today, 2.5D and 3D will eventually surpass it. The limits of Moore's Law for 2D packaging will soon be reached; 2.5D and 3D are therefore the path to further increases in transistors per unit area. Since 3D packaging incorporates more dice per unit of footprint than 2D packaging, 3D is likely to become the leading design approach in the future. Despite the performance gains these new packaging approaches bring, there are economic and technical challenges to navigate before wide-scale adoption in the market.
The economic challenges come from the added costs of the new designs. Die integration is more expensive, and the existing test structures are not suitable for the new designs. Although 2.5D can reuse most of the existing 2D test infrastructure, 3D will require a complete overhaul, adding further cost for the chip industry. The additional production cost will translate into higher chip prices, which runs counter to the industry's tradition of ever-cheaper chips.[4] And since the dice are stacked on top of each other, heat dissipation, which can damage the dice, remains a central issue. Until companies develop designs that deal with the heat and with the rising production costs involved, 3D packaging will continue to lag behind 2D and 2.5D packaging.
By Demba Komma and James Grantham
Article in question for reference:
[1] Sperling, Ed. “Thinking Outside The Chip.” Semiconductor Engineering. N.p., 14 Jan. 2016. Web. 23 Feb. 2016.
References:
[2] Santarini, Mike. “2.5D ICs Are More than a Stepping Stone to 3D ICs | EE Times.” EETimes. N.p., 27 Mar. 2012. Web. 19 Feb. 2016. <http://www.eetimes.com/document.asp?doc_id=1279514>.
[3] Maxfield, Clive. “2D vs. 2.5D vs. 3D ICs 101 | EE Times.” EETimes. N.p., 8 Apr. 2012. Web. 19 Feb. 2016.
[4] Bailey, Brian. “When Will 2.5D Cut Costs?” Semiconductor Engineering. N.p., 7 Aug. 2014. Web. 19 Feb. 2016. <http://semiengineering.com/will-2-5d-reduce-costs/>.
EUV is coming but will we need it?
I have written multiple articles about this year's SPIE Advanced Lithography Conference describing all of the progress EUV has made in the last year. Source power is improving, photoresists are getting faster, prototype pellicles are in testing, multiple sites around the world are exposing wafers by the thousands, and more. The current thinking is that EUV will be ready for production around 2018. All of this is very promising, but while we have been waiting for EUV the industry has been moving on, and a possible scenario is emerging in which, by the time EUV is available, it won't be very useful. In the balance of this article I will lay out a scenario in which changes in device structures and fabrication processes could make EUV largely unnecessary.
My Advanced Lithography Articles summarizing the recent progress of EUV are available here:
- TSMC and Intel on the Long Road to EUV
- ASML and IMEC EUV Process
- Intel EUV Photoresist Progress and ASML High NA EUV
There are three major product categories that drive capital equipment purchases in the semiconductor industry today, NAND Flash, DRAM and Logic.
For many years NAND Flash drove the requirement for the latest lithography tools. 2D NAND Flash devices went through yearly lithography shrinks, eventually reaching 16nm devices manufactured in high volume with Self-Aligned Quadruple Patterning (SAQP), but difficulties with 2D NAND device scaling and the cost of the complex patterning schemes required have brought 2D NAND scaling to an end. Adjacent-cell interference, control-gate-to-floating-gate coupling and the shrinking number of electrons stored in a cell are just some of the device-related issues. The solution for NAND has been the move to 3D. 3D NAND creates vertical strings of NAND cells, with the cells formed from alternating layers of material deposited using CVD techniques. The lithography requirements for 3D NAND are relaxed; for example, Samsung's 32-layer part has only one double-patterned layer. Scaling is accomplished by adding layers, not by shrinking the photolithographically defined dimensions, and scaling to more than 100 layers is expected to yield devices with over 1Tb of capacity. 3D NAND has therefore made EUV unnecessary for NAND.
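The layer-scaling arithmetic can be sketched in a few lines. The cell pitch and bits-per-cell below are illustrative round numbers, not figures for any actual 3D NAND process:

```python
# Rough sketch of why 3D NAND scales by adding layers rather than by
# shrinking lithography: bit density grows linearly with layer count
# at a fixed (relaxed) cell pitch.

def nand_bits_per_mm2(cell_pitch_nm: float, layers: int,
                      bits_per_cell: int = 2) -> float:
    cells_per_mm2 = (1e6 / cell_pitch_nm) ** 2   # square cell array
    return cells_per_mm2 * layers * bits_per_cell

# Same pitch, three times the layers: three times the capacity,
# with no change to the patterned dimensions.
base = nand_bits_per_mm2(cell_pitch_nm=40, layers=32)
tall = nand_bits_per_mm2(cell_pitch_nm=40, layers=96)
print(tall / base)  # 3.0
```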
DRAM has followed a path similar to 2D NAND, with yearly shrinks and the use of complex multi-patterning schemes. Recently DRAM scaling has slowed due to device scaling issues. DRAM stores a value as the presence or absence of charge on a capacitor fabricated in series with an access transistor that controls it. Access transistors need a relatively long channel length to minimize leakage, which has led to a variety of access-transistor structures such as RCAT, SRCAT and the saddle fin. The next step in access-transistor scaling is expected to be the VCAT, but to date the vertical VCAT has been difficult to fabricate. In parallel, DRAM capacitors need to scale down in horizontal area while maintaining a minimum acceptable capacitance. Capacitor scaling to date has relied on vertical structures, roughened surfaces and high-k dielectrics. Further vertical scaling has been limited by mechanical issues, and there is also a fundamental trade-off between a material's dielectric constant (k) and its band gap: as k increases the band gap decreases, leading to leakage problems. Achieving acceptable leakage through the capacitor constrains the materials that can be used. Some options remain; for example, bit-line optimization may allow smaller capacitance values to be used, and there are rumors of a new film. At present these device scaling issues have moved DRAM away from being a leading candidate for EUV usage. DRAM also appears to be a leading area of Directed Self-Assembly (DSA) research.
Longer term, a DRAM alternative is needed. Conventional wisdom is that STT-MRAM will eventually replace DRAM, but to date MRAM density, and therefore cost, is not competitive with DRAM (and there are other development issues). MRAM cells are fabricated in the metal layers over logic devices, opening up the possibility of a move to some kind of 3D structure, possibly similar to the recently disclosed 3D XPoint memory (more on that later).
In the logic space the leading companies, Intel, TSMC, Samsung and GlobalFoundries, are all in production of 16nm/14nm FinFETs. 10nm is expected to enter use in late 2016 at the foundries and in late 2017 at Intel. TSMC is currently forecasting that 7nm will be available in late 2017, and is guiding that they will "exercise" EUV at 10nm for 5nm use. Intel is leaving the door open on EUV use at 7nm, and assuming they don't produce 7nm until 2019 or later that would make sense. GlobalFoundries has said they are developing 7nm based on what they can reasonably do without EUV, with EUV as a possible second-generation 7nm cost reduction. All of this lines EUV up for insertion late in the 7nm node or at the 5nm node.
Against this backdrop it is interesting to look at the evolution of logic devices. Intel introduced FinFETs at 22nm, shrank them for their second generation at 14nm, and is guiding that its third-generation FinFETs at 10nm will not have new materials. 16nm/14nm was the first-generation FinFET for all of the foundries; 10nm will be their second generation and 7nm their third (we should note here that, from a pitch perspective, the foundries' 7nm "node" is similar to Intel's 10nm node). At one time I thought we might start to see FinFETs with high-mobility channels by 7nm, or possibly even 10nm, but due to a variety of challenges in achieving high performance with high-mobility channels in actual devices, and the difficulty of changing an existing structure to a new material, I now think FinFETs will likely stay with silicon channels until they are replaced by a new device. This leads to the question of when we might see a new device and what it might look like.
IMEC is one of the leading semiconductor technology research institutions in the world, if not the leading one. IMEC appears to be settling on stacked horizontal nanowires as the successor to FinFETs, and the device experts I talk to are also optimistic about this approach. Horizontal nanowires are fabricated by depositing a stack of alternating materials using CVD techniques and then patterning them, a technique that can create a stack of multiple nanowires. One really intriguing possibility is, for example, a 4-nanowire stack where 2 wires are NMOS and 2 are PMOS. This would yield a stacked CMOS device and be equivalent to a node or more of scaling without shrinking the lithographic dimensions. Taking the idea a step further to 8 stacked wires would give a stack of two CMOS pairs. You could also stack layers while relaxing the horizontal width, scaling device density while taking the pressure off lithography to provide shrinks. This would be analogous to what has been done with 3D NAND.
Of course we also need to look at when this might happen. My best guess is around 5nm, at least for the foundries. With the foundries lining up to use EUV either not at all at 7nm or only late in the node, if a 5nm solution emerges that doesn't need EUV, how much of an EUV investment are they likely to make? For Intel I am thinking horizontal nanowires might be a 7nm solution, but with Intel now on a 3-year node cadence that would put Intel's 7nm node at around 2020, likely around when the foundries would be introducing their 5nm nodes.
The picture all this paints is that NAND, having gone to a 3D structure, no longer drives the need for EUV, and logic also has the potential to move to a 3D structure with relaxed requirements. DRAM scaling has slowed due to device scaling issues, and DRAM is a leading DSA candidate. So what will drive the need for EUV?
Intel and Micron recently introduced their 3D XPoint memory architecture. Faster and with better endurance than NAND, and cheaper than DRAM, 3D XPoint is positioned to be used as storage-class memory, a kind of buffer between DRAM main memory and non-volatile storage such as NAND and hard disk drives. The first 3D XPoint memory has 2 memory layers fabricated in the interconnect stack over a logic circuit that controls the memory. We estimate the memory layers take 2 mask layers each and are a 25nm technology requiring multi-patterning for each layer. 3D XPoint offers the ability to scale both by adding layers and by shrinking the memory-layer pitch. If 3D XPoint is scaled simply by adding memory layers, EUV might not be interesting; if it were to begin scaling pitch, EUV would become attractive. With 3D XPoint not expected to be in production until 2017, and then needing to become established in the market, it is hard to envision it successfully driving EUV adoption.
This is of course just one possible scenario for the direction of semiconductor technology, but clearly, while we have been waiting for EUV, the industry has been moving forward on other fronts. Multi-patterning also continues to get better and cheaper. By 2018, when EUV is currently projected to be ready for production, the evolution of semiconductor devices may have made it unnecessary.
Making PLM Actually Work for IC Design
The topic of Product Lifecycle Management (PLM) conjures up images of airplanes, tanks and cars. That's because it was developed decades ago to make product development and delivery more efficient for big, expensive manufactured products, and it worked well for its intended markets by combining and managing all the phases of product development, parts procurement and manufacturing. Unfortunately, while the concept is sound, there has been little success implementing classic PLM systems for IC design.
There are several significant reasons that PLM has not gained traction in the IC design space. Traditionally, PLM systems are applied by taking a relatively static design and manufacturing process and building an extensive set of customizations and specially tailored code to handle that one specific case. As we know, IC design changes at every node, and even at existing nodes flows and tools are always being updated. As a result, rather than setting up a system once and using it continuously, IC design requires adaptability from its PLM systems.
Another big difference in IC design is that semiconductor IPs are really hierarchical, self-contained designs themselves. So rather than taking a flat bill of materials from suppliers or internal sources and assembling the parts into a finished product, ICs have layer upon layer of blocks, each of which is potentially composed of smaller IP blocks. The requirement for a semiconductor PLM is to manage all the design and verification steps at each level as information moves from development to utilization.
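The contrast with a flat bill of materials is easy to see in a minimal sketch of such a hierarchy. The block names, versions and fields below are hypothetical, not the actual Methodics data model:

```python
# Minimal sketch of a hierarchical IP view: unlike a flat BOM,
# each IP block may itself contain smaller IP blocks.

from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    version: str
    children: list = field(default_factory=list)

    def flatten(self):
        """Yield this block and every block nested beneath it."""
        yield self
        for child in self.children:
            yield from child.flatten()

# A hypothetical SoC: a DDR controller that itself contains a SerDes PHY.
serdes = IPBlock("serdes_phy", "2.1")
ddr = IPBlock("ddr_ctrl", "1.4", [serdes])
soc = IPBlock("soc_top", "0.9", [ddr, IPBlock("cpu_cluster", "3.0")])

print([b.name for b in soc.flatten()])
# ['soc_top', 'ddr_ctrl', 'serdes_phy', 'cpu_cluster']
```

A PLM tool for ICs has to track design and verification state at every level of such a tree, not just at the leaves.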
The data we are talking about includes technology files, tool versions, quality metrics, constraint specifications, dependencies, etc. Access control, release management and a number of other features are also necessary. In fact, Methodics has compiled a list of all the properties needed in each base object of an IP PLM system.
Methodics is well versed in this topic because they have developed a PLM system specifically tailored to semiconductor design. It uses their ProjectIC design management system as its foundation, and ProjectIC in turn is built on industry-standard revision control systems such as Perforce, Git or Subversion, used in their native form. The real question, however, is what the steps are to connect Methodics' IP Lifecycle Management (IPLM) system to a semiconductor design project and all of its potentially hierarchical IP components.
Fortunately, Methodics has written a white paper that covers the fundamentals as well as the integration points for their IPLM system. The process starts with customers adding metadata for the IP they wish to include. This can be run as a batch operation once the desired fields have been defined. There is some discretion here as to what to include, but the flexibility allows customers to attach whatever metadata they deem important for each IP block, and it is easy to update or modify these definitions.
Next is the process of importing existing IP into workspaces so it can be worked on and released to other users and teams. IP can now be changed and worked on in a systematic fashion, and any tool run results can be captured and saved; this might include P&R results, or the output from verification runs such as DRC, simulation, etc. All this information is maintained with the IP for future reference and use.
At this point it is possible, using Methodics' IPLM system, to create releases for the users who depend on the IP. As downstream users integrate these IP releases into their own designs, data about where the IP is used is saved, making it possible to determine where any specific IP is deployed.
Other metadata can be added back into the IPLM system from downstream users and external sources. Custom metadata can be created using the ProjectIC APIs, which are comprehensive, well documented, and make it easy to write custom scripts that provide richer data on IP implementation and deployment within an enterprise.
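To make the idea concrete, here is a hypothetical sketch of bundling custom metadata for an IP release. The field names and values are invented for illustration; the real ProjectIC APIs are documented by Methodics and may look nothing like this:

```python
# Hypothetical sketch: package release metadata (tool results, versions)
# into a JSON record that could be handed to a PLM API.

import json

def build_metadata_payload(ip_name: str, release: str, metadata: dict) -> dict:
    """Bundle release metadata into a JSON-serializable record."""
    return {"ip": ip_name, "release": release, "metadata": metadata}

# Example: record a DRC result and a P&R tool version with a release.
payload = build_metadata_payload(
    "ddr_ctrl", "1.4",
    {"drc": "clean", "pnr_tool": "innovus-21.1"})

print(json.dumps(payload, sort_keys=True))
```

The value of keeping this machine-readable is that downstream scripts can query it when deciding whether an IP release is safe to integrate.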
The Methodics white paper goes into much deeper detail than we have space for here. If you are interested in how PLM can realistically be applied to semiconductor design, reading it is highly recommended. A copy is available through their website.
Multiple Facets of Cyber Security Workshop!
Security is one of the new categories we track and it is keeping SemiWiki very busy: security in its own right, as a result of the FBI vs. Apple comedy routine, but also security across the EDA, IP, ARM, Mobile, IoT, and Automotive categories.
Continue reading “Multiple Facets of Cyber Security Workshop!”
2.5D supply chain takes HBM over the wall
SoC designers have hit the memory wall head on. Although most SoCs address a relatively small memory capacity compared with PC and server chips, memory power consumption and bandwidth are struggling to keep up with processing and content expectations. A recent webinar looks at HBM as a possible solution.
Continue reading “2.5D supply chain takes HBM over the wall”
Neural Networks Poised to Make Big Changes in Our World
Probably the most interesting thing about Neural Networks is how they can be used for complex recognition tasks that we as people can easily perform but would have a lot of trouble explaining how we do. One very good example of a problem Neural Networks can tackle is determining when someone is making a fake smile. Intuitively we know how to do this, but we would be hard pressed to explain the process we use.
Neural Networks are being used for facial recognition, medical diagnosis, autonomous vehicles, and more. The list of applications is limitless, and the best part is that problems can be thrown at Neural Networks without having to map out a specific solution. Instead of hard coded programs that can do one specific task and no other, we can build a Neural Network and train it over and over again to do whatever tasks we want it to perform.
The power and potential of Neural Networks has not gone unnoticed by the major players in software and hardware. At the CDNLive event in Silicon Valley last week, Cadence CEO Lip-Bu Tan's keynote talk featured Neural Networks, and a few months ago Cadence hosted an event specifically targeted at embedded Neural Networks. While at first glance using Neural Networks in an embedded environment sounds far-fetched, the reality is that with today's technology the training phase can be executed on servers and the coefficients for the task at hand downloaded to run the recognition process on an embedded platform.
It is worth noting that Google and Nvidia were represented among the speakers at the Cadence Embedded Neural Network Summit in February. However, I found one of the most interesting talks was by Sumit Sanyal, founder and CEO of Minds.ai. He emphasized that the 'new' binaries will be the training weights for Neural Networks. The training process is lengthy, but his company and others are working to shorten it. They are also looking to create the smallest possible training weights so they can be used on almost any platform.
Instead of larger word sizes and floating-point numbers, training weights can be efficiently expressed as 8-bit values, which also leverages the existing compute infrastructure. If, for example, we wanted to go even smaller, to 4-bit values, that would cause extra work for hardware designed for larger word sizes. Parallelism is also hugely valuable in this space: an overlap of only one pixel is needed in the data used for training, allowing large training problems to be broken up and solved in parallel.
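A minimal sketch of the 8-bit idea, using simple symmetric per-tensor quantization (one of several schemes in use; the weight values below are arbitrary):

```python
# Map float weights onto signed 8-bit integers [-127, 127] with a single
# scale factor, then reconstruct approximate floats from them.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(w)
print(q)  # [50, -127, 2, 100] -- each weight now fits in one byte

# Reconstruction error stays small relative to the weight range.
err = max(abs(a - b) for a, b in zip(dequantize(q, scale), w))
print(err < 0.005)  # True
```

Each weight shrinks from 4 bytes (float32) to 1 byte, which is exactly why small integer word sizes make trained networks practical on embedded platforms.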
Astoundingly, Neural Networks are significantly more accurate than conventional coding approaches for the recognition problems they have been applied to. Accuracy percentages for facial recognition are in the high 90s. Let's talk about one specific benchmark for Neural Networks, the German Traffic Sign Recognition Benchmark (GTSRB). It consists of 51,840 images of German road signs, divided into 43 classes. The image sizes range from 15 pixels on a side up to 222 by 193 pixels. The two main metrics for this benchmark are recognition accuracy and the size of the training weights used for recognition.
Samer Hijazi of Cadence presented some of their work with Neural Networks and talked about their results on the GTSRB. They aggressively reduced the size of the training weights by combining layers used in the processing, reduced the size of each layer using numerical methods, and finally applied a hierarchical approach to the recognition problem. Using these methods, they were able to achieve an extremely high recognition accuracy of 99.8% with a number of MACs per frame more than an order of magnitude smaller than the previous best result.
Given the wide range of applications and the soon-to-be widespread ability to train and then use Neural Networks in mobile and embedded platforms, we can expect to see huge advances in almost every computational domain. We are seeing hints of this with autonomous cars and many other areas. We live in a visual world, and computers are now, for the first time, learning to see the way we do and give us back meaningful information. The same goes for sound, any other sensor input, and big data for that matter. Think of medicine (radiology, tumor detection, etc.), geology with images from space, or physics with data from particle colliders. Manufacturing and quality control are other areas that stand to be revolutionized. For more information on the Cadence Tensilica technology used to build Neural Networks, you can look here.
Custom Layout Productivity Gets a Boost
In the 1970s, when Moore's Law was still in its infancy, Bill Lattin of Intel published a landmark paper [1]. In it he identified the need for new design tools and methods to improve layout productivity, which he defined as the number of drawn and verified transistors per day per layout designer. Existing solutions, he said, would simply not keep up with the fabrication process roadmap.
Continue reading “Custom Layout Productivity Gets a Boost”
Silicon Photonics – Back to the Future – Part Deux?
I cut my teeth in silicon IC design at Texas Instruments during the early 1980s, working on what would eventually become the ASIC and fabless IC industries that enabled the explosive growth of the electronics industry over the last three decades. Of late I've become involved in the silicon photonics space, and I am getting an incredible sense of déjà vu. I've seen this movie before.
Silicon photonic design is at the same stage IC design was in the late 1970s. Most photonic IC (PIC) work is still taking place in labs, with the few production parts coming from well-funded IDMs like Intel, IBM and STMicroelectronics. The focus is mostly at the technology level, figuring out how to make better devices. Design is still being done bottom-up, not top-down (e.g. lay out a device, run TCAD simulations, fab some parts and see what happens). The question is how long it will be before silicon photonics really takes off, and whether it will ever be as pervasive as electronics are today.
The good news for nascent technologies such as silicon photonics is that we are at a different starting point now than we were 30 years ago. IC design methodologies have been codified, and we have well-understood business models and a mature ecosystem of specialized suppliers for CAD, fabrication, packaging and test. Yet this very infrastructure could be the thing that holds silicon photonics back.
Case in point: how many big pure-play foundries support silicon photonic processes? Zero. Why? The simple answer is that the ecosystem has not only matured but has been highly optimized for revenue and profit. Projected wafer demand for silicon photonics over the next couple of years is still in the thousands of wafers per year, compared with tens of thousands of CMOS wafers per month per fab in the electronics industry. The opportunity costs of running a photonics line in a pure-play fab are very high. This could change soon, however, with the push towards "all-optical" data centers and the introduction of embedded optics solutions. Yole Développement predicts the silicon photonics market will grow at a CAGR of 38%, from $25M in 2013 to $700M by 2024, with an inflection point in 2018 driven by four applications: HPC, all-optical data centers, telecom and sensors.
For now, research fabs such as imec, CEA-Leti, IME/A*STAR and a few others are carrying the load, making multi-project wafer (MPW) runs available for companies looking to test their ideas. Most of these fabs are in Europe, with the exception of IME/A*STAR in Singapore. Recently the United States announced the formation of AIM (the American Institute for Manufacturing Integrated Photonics) to support silicon photonics by creating a national PIC manufacturing infrastructure center in the U.S.; this is still at a very early stage. Silicon photonic MPW services from the research fabs run on the order of ~$2K/mm², compared with ~$1K/mm² for 0.13um CMOS logic. While the margins are higher, the low wafer volumes have not been attractive enough for commercial foundries to provide such MPW services. A good explanation of the current silicon photonics ecosystem can be found in a paper by Andy Eu-Jin Lim and his team at IME, covered in chapter 6 of the book Silicon Photonics III (the foundry model discussion).
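Using those per-area figures, a quick worked example shows the premium for a modest PIC die (the die size is an arbitrary illustration, not taken from any fab's price list):

```python
# MPW cost comparison at the quoted rates: ~$2K/mm^2 for photonics at
# the research fabs vs ~$1K/mm^2 for 0.13um CMOS logic.

die_area_mm2 = 25                  # arbitrary example die size
photonic_cost = 2000 * die_area_mm2
cmos_cost = 1000 * die_area_mm2

print(photonic_cost, cmos_cost)    # 50000 25000
print(photonic_cost / cmos_cost)   # 2.0 -- a 2x premium per mm^2
```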
There has been some movement on the fabless side, with a collaboration started in 2011 between IME/A*STAR, GLOBALFOUNDRIES and Alcatel-Lucent to transfer the IME 25G silicon photonics platform to GF's 200mm 0.18um CMOS foundry line. It is anticipated that their cost per mm² for a silicon PIC will be significantly less than at the research fabs, although it is not yet clear how that will translate into pricing, and the capability has yet to come to market. At the same time, several technical differences between electronic and photonic processes have pushed the industry towards non-monolithic solutions. CMOS SOI is optimized for transistor performance, while photonics SOI uses a much thicker buried oxide to reduce optical loss. Additionally, as the dimensions of the electronics shrink, there is a growing discrepancy between the device dimensions of transistors (< 100nm) and photonic devices (0.1-1um). These differences, plus the lack of a good light source on silicon, have pushed the industry towards hybrid electronic/photonic solutions that combine separate electronic and photonic die in a common package using 2.5D/3D techniques. This allows the technology nodes, substrate types and wafer sizes to be decoupled while still enabling the use of CMOS-compatible process equipment, without the need to integrate two different process flows.
So it appears that, while we still have a journey ahead of us, momentum is growing for silicon photonics. I doubt I'll see flying DeLoreans in my lifetime, but silicon photonics may let me relive my journey through the birth of modern-day electronics.

