

GSA Entrepreneurship Conference
by Paul McLellan on 06-10-2013 at 12:04 am

GSA’s next event is the annual Entrepreneurship Conference to be held at the Computer History Museum on July 18th. The event runs from 3pm to 8pm. Attendance is free but you must register here.

The event consists of 5 panel sessions followed by a reception. The full roster of who will be on each panel is not completely finalized yet, but here is the current status:

  • Panel 1: A panel discussion with leading analysts on the semiconductor outlook for 2013 and beyond. Dan Niles (who does a regular quarterly report for GSA) will attend.
  • Panel 2: Fueling Success and Innovation: A look at existing and alternative semiconductor funding models that are fueling innovation, spurring investment, and mitigating risk. Shankar Chandrun from the Samsung Catalyst Fund and Angel Orrantia from the SK Telecom Innovation Center are on the panel. I blogged about this capital-lite funding model here.
  • Panel 3: Enabling Today’s Start-ups: How the ecosystem is helping start-ups secure their operational success by reducing initial costs and infrastructure requirements. Mike Noonen of GlobalFoundries, Bruce Jewett of Synopsys, Brad Paulsen of TSMC and Geoff Ribar of Cadence are on the panel. Len Pernham from MoSys is the moderator.
  • Panel 4: Exits, Finding Success in Semiconductor Start-ups: A panel of VCs, bankers and investment firms that discusses the key elements required for a successful semiconductor exit in today’s environment. Stanley Pierson from Pillsbury is one of the panelists.
  • Panel 5: Success Stories in Funding, IPOs and M&As: This panel will highlight lessons learned from individuals who have successfully completed funding and exits, and will address the challenges in today’s environment that entrepreneurs must navigate in order to ensure success. Panelists are Phil Delansay of Aquantia, Paul Russo of Geo Semiconductor and Dennis Segers of Tabula. Ralph Schmitt of OCZ is the moderator.

Full details of the panels, including the panelists as they get signed up, are on the GSA website here. The event wraps up with a reception at 7:30pm.



…And Now Intel Will Make a Turn Towards Memories as it Plans to Capture Samsung
by Ed McKernan on 06-09-2013 at 10:00 pm

While eyes remain fixated on the architectural battle between Intel and ARM, a second front is about to open that will determine mobile supremacy for the rest of the decade. While yesterday’s story of the collapse of Wintel and the anointing of Google and Samsung is repeated endlessly, the tables are being set for a significant turn. Thanks to Intel introducing a very competitive Silvermont architecture and the reality that a process gap is growing vis-à-vis Samsung’s foundry, a new hierarchy is forming that will allow the x86 monopoly to extend its power by pulling DRAM and NAND into an extended SoC family of solutions. Shortages and rising prices… Continue reading “…And Now Intel Will Make a Turn Towards Memories as it Plans to Capture Samsung”



GPU vs. FPGA
by Luke Miller on 06-09-2013 at 9:00 pm

I just don’t understand it. My kids love surprises, but I have yet to find management that does; go figure, but boy, during a review they can really spring them on ya! What surprises me is the absurdity of my title, GPU vs. FPGA. FPGAs are not GPUs and vice versa, but nonetheless there is a push to make a fit where nature does not allow one. I liken the FPGA to a Ferrari and the GPU to Bigfoot. Remember that, going to the Aud in your town and watching that monster truck crush them cars? I never regained my hearing. Vinny Boombots is still saying the suit will close any day now.

The GPU back in the day was just a Graphics Processing Unit. Today, thanks to CUDA and OpenCL, they can be programmed to be massively parallel. Why is the word parallel always preceded by massively nowadays? Anyway, we see the benchmarks: take an NVIDIA Fermi with all its cores, unroll your 262k-point FFT and get ’er done in 9us. Not really; we forgot the memory overhead, which is roughly another 60ms. An old Virtex-5 does the same FFT in 2.6ms. The FPGA used about 15 watts and the GPU roughly 130 watts. Not that I’m a green fella and all that, but for heavy DSP processing where these things have a SWaP requirement, the GPU is a tough cooling challenge and awfully power hungry.
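To see why the transfer overhead dominates, here is a back-of-envelope sketch in Python using only the rough figures quoted above (9 us of kernel time, ~60 ms of memory transfer, 2.6 ms on the Virtex-5, and the approximate 130 W vs. 15 W power numbers). Exact values will vary with the board and driver, so treat this as illustrative arithmetic rather than a benchmark.

```python
# Back-of-envelope comparison of a 262k-point FFT on a Fermi-class GPU
# vs. an older Virtex-5 FPGA, using the rough figures quoted above.
# Illustrative arithmetic only, not measured benchmark data.

gpu_kernel_s = 9e-6     # FFT kernel time on the GPU
gpu_xfer_s   = 60e-3    # host <-> device memory transfer overhead
fpga_fft_s   = 2.6e-3   # same FFT on a Virtex-5
gpu_power_w  = 130.0    # rough GPU board power
fpga_power_w = 15.0     # rough FPGA power

gpu_total_s = gpu_kernel_s + gpu_xfer_s   # latency the system actually sees

print(f"GPU effective latency : {gpu_total_s * 1e3:.1f} ms")
print(f"FPGA latency          : {fpga_fft_s * 1e3:.1f} ms")
print(f"GPU energy per FFT    : {gpu_total_s * gpu_power_w * 1e3:.0f} mJ")
print(f"FPGA energy per FFT   : {fpga_fft_s * fpga_power_w * 1e3:.1f} mJ")
```

The point of the arithmetic is that the kernel time is almost irrelevant; the memory movement and the power budget are what a SWaP-constrained system actually pays for.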

Who’s in control? That’s like telling someone to settle down when they are spun up. Try it; you’ll be in for some laughs. Unless you are using the Intel Sandy Bridge (and I’m not sure you can run GPGPU code there, but let’s just say you could), other GPUs need to be controlled by a CPU. More watts and overhead. Now the one-die CPU/GPU is a great idea, but an FPGA performing DSP processing does not need a CPU per se to control it. Even if it did, the Xilinx Zynq is a nice fit, with its dual ARMs handling the out-of-band calculations right inside the same FPGA. What about I/O? The FPGA can support anything you can think of, and for real-time data processing we are not using TCP/IP or even UDP; think low latency here.

The takeaway is that they really are two different beasts, and it is complicated to force a fit where it does not belong. What has happened, though, is that the open community, thanks to CUDA, has allowed the littlest of nerds to play with GPU processing, and the community has come to this conclusion: it is cool. And it really is, but when you have requirements to meet and processing that needs low latency and the same answer at the same time, every time (deterministic), the FPGA will be your choice. Now perhaps you have a non-real-time system and need to play back lots and lots of data for hardware-in-the-loop acceleration; then the GPU may be your answer. My point is: get your head out of the buzzword bingo and sift through all the marketing propaganda. Make the right decision, design the best system for your customer, and make your stockholders happy…. Have fun…




SoC Sign-off, Real Intent at DAC
by Daniel Payne on 06-09-2013 at 8:10 pm

Monday morning at DAC I met with Real Intent to get an update on their SoC sign-off tools:

  • Dr. Prakash Narain, President and CEO
  • Graham Bell, Sr. Dir. Mktg.

Years ago Prakash was at IBM during the only two years that they attended DAC, in an attempt to offer their internal EDA tools to the EDA marketplace. Graham worked at Nassda marketing the HSIM hierarchical FastSPICE simulator, competing against me and Mentor’s Mach TA simulator (HSIM won, big time).


Dr. Prakash Narain, Graham Bell
Continue reading “SoC Sign-off, Real Intent at DAC”



IC Design for Implantable Devices Treating Epilepsy
by Daniel Payne on 06-09-2013 at 8:05 pm

I’m utterly amazed at how IC-based products are improving our quality of life through implantable devices. The modern-day pacemaker has given people added years of life by electrically stimulating the heart. A privately-held company called NeuroPace was founded in Mountain View, California to treat epilepsy using responsive neurostimulation. Their first product is called the RNS System (Responsive NeuroStimulation), and it is a programmable, battery-powered, microprocessor-controlled device that delivers a short train of electrical pulses to the brain through implanted leads.

I spoke with Dean Anderson, engineering manager at NeuroPace about their IC design approach.


Dean Anderson, NeuroPace

Interview

Q: What is your role at NeuroPace?

I’m an IC design manager working on the next generation of neuro-stimulators. We have mostly system and front-end engineers, and we contract out the IC layout work.

Q: How did you get interested in IC design?

I’ve always been interested in DSP and bio-medical applications. Out of grad school I worked at a pacemaker company, and then spent 12-14 years on very low-power embedded devices. Small size and low power are the big design challenges, along with FDA approval using clinical trials to prove efficacy on patients. Once you submit data to the FDA you have to await their decision; after approval it’s OK to sell into the American market.

Q: What is the IC design flow approach at NeuroPace?

All of our chips are mixed-signal designs. We just taped out an AMS SoC in December. Our approach is more bottom-up, where we partition our design into sub-blocks, then implement each sub-block. Design follows the partitioning and system specification.

For the digital portion we prototype in an FPGA. We use Verilog for our digital design and verification test benches, and use both ModelSim (Mentor) and Incisive (Cadence).

Analog designs are simulated and may be placed into a test chip before the final AMS chip. Spectre (Cadence) is our SPICE simulator.

Schematic capture is with Cadence Virtuoso.

Integration brings the analog and digital blocks together. We need to do more simulation of the analog and digital blocks together. Interfaces between the blocks are made as simple as possible.

Q: What were your latest Chip specs?

It was about 8 million transistors, running at 5 MHz to achieve nA levels. An idle chip consumes maybe 2 uA, while peak usage is 20 uA; turning on the radio raises us to mA levels. We are totally power-centric because of the long battery life we need for 5 years of operation, though the longevity depends on the needs of the patient. Our power supply is a lithium-based battery specially designed for implants. Our device is curved and is installed inside a patient’s skull.
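As a rough illustration of why every microamp matters, here is a hypothetical back-of-envelope calculation in Python of the charge a battery must supply over five years at the current levels quoted above. The duty cycle is an assumption for illustration, not a NeuroPace figure.

```python
# Hypothetical back-of-envelope: charge drawn over a 5-year implant life
# at the idle/peak currents quoted above. The duty cycle is an assumption.

idle_ua, peak_ua = 2.0, 20.0   # quoted idle and peak supply currents (uA)
peak_duty = 0.10               # assumed fraction of time spent at peak

avg_ua = idle_ua * (1 - peak_duty) + peak_ua * peak_duty   # weighted average

hours = 5 * 365 * 24           # five years of continuous operation
mah = avg_ua * 1e-3 * hours    # milliamp-hours the battery must supply

print(f"Average current : {avg_ua:.1f} uA")
print(f"Charge over 5 yr: {mah:.0f} mAh")   # on the order of a few hundred mAh
```

Even this toy estimate shows that a few microamps of average current translate into hundreds of milliamp-hours over the device lifetime, which is why the design is driven by current budgets at the nA and uA level.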

Our patients can go swimming because our device is sealed against the elements.

Q: When did you first start using EDA tools from Concept Engineering?

I started using their GateVision Pro tool a few years ago at the pacemaker company. It was the quickest way to navigate through large Verilog netlists. We continue to use this tool here at NeuroPace to navigate our digital and analog netlists.

StarVision PRO lets us visualize our AMS designs, and it’s more efficient to use this tool compared to Virtuoso.

Q: If you didn’t have StarVision, then what would you do instead?

We would have to manually look at netlists, or buy something very expensive like Encounter, which is overkill. Encounter only shows gates and blocks; there’s no transistor level to visualize, so it’s not as easy for us to use compared with StarVision.

Q: What about IP re-use?

A large part of our designs re-uses our own IP blocks. Memories are one type of block we buy as IP. Microcontrollers are another IP block that we buy and then integrate into our SoC.

Q: What foundries do you work with?

For the lowest-power applications we are restricted to foundries like ON Semiconductor, which is well-known in this industry.

Summary
NeuroPace has an AMS design tool flow for the implantable electronics market, and uses a variety of EDA tools from multiple vendors. Their very low-power requirements make for a very interesting design challenge.




First FinFETs Manufactured at #50DAC!
by Daniel Nenni on 06-09-2013 at 5:00 pm


This was my 30th DAC and the second most memorable. The most memorable was my second DAC (1985) in Las Vegas with my new bride. We had a romantic evening ending with ice cream sundaes at midnight that we still talk about. This year SemiWiki had Dr. Paul McLellan, Dr. Eric Esteve, Daniel Payne, Don Dingee, Randy Smith, and myself in attendance so expect the best live #50DAC coverage right here, right now.

This year, for the first time, I was a DAC speaker with my beautiful wife in the audience. I did the introduction to the Winning in Monte Carlo: Managing Simulations Under Variability and Reliability tutorial. My slides are HERE in case you are interested. Solido generously gave away the book Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide to the attendees who stopped by their booth for a demo. Solido is a very clever company, believe it.

Speaking of Solido, by far the best booth promotion this year was the Solido 3D replicator manufacturing FinFETs. Hundreds of them were given away and now the engineers at Solido headquarters in Saskatoon have a new toy to play with:

On the show floor I talked to the majority of the exhibitors and found them pleasantly surprised at the turnout, in terms of both quality and quantity. I even took a look at the meeting-room schedules of the companies I work with as foundry liaison and found them not just full but overbooked. They were doubling and tripling up demos, and the customer names were very familiar to me, including my favorite fruit company. I’m a little sad that DAC is in San Francisco (my backyard) the next two years because it is good to see new places and eat new foods. The BBQ in Austin is legendary!

As I predicted, the best new product at DAC was iDRM from Sage DA. People lined up to see it, with two demo stations and two meeting rooms running non-stop all three days. The post-DAC evaluation list for iDRM is the best (in quantity and quality) I have seen in a long time.

The only DAC downside I saw was the lack of IP companies, and the ones that did attend were not as busy as the EDA folks (except for ARM of course). To me this is a DAC organization problem. Semiconductor IP is critical to modern semiconductor design, so let’s come up with a better IP strategy for next year.

The parties were also great this year. The venues all had good stories to tell. Monday night was the DAC party at Austin City Limits. We had VIP bracelets from Atrenta and Cadence which included great food, SpyGlass margaritas, and perfect seats for the three-band concert. You would have to pay $1k for this kind of outing, so thank you Atrenta! Later we migrated upstairs to the Cadence party and my wife got an “I love DAC” tattoo on her back. Yes, my wife got a tramp stamp, which was immediately texted to our kids, to their horror. Tuesday night was the Synopsys press dinner at Malverde and the Denali party at Maggie Mae’s.

Sunday night we had dinner at the Driskill Hotel, which is an incredible piece of history. While I worked the conference, my wife had a spa day at the Hilton on Monday, a walking tour of Austin on Tuesday, and then joined me at the conference on Wednesday to meet SemiWiki subscribing companies. It was nice for her to put faces to the names she has had the pleasure of working with over the last two years. Wednesday we had a BBQ lunch at Iron Works and buffalo meatloaf for dinner at the Moonshine Bar and Grill. Our second-best DAC in 30 years, believe it!




Metastability and Fatal System Errors
by Daniel Nenni on 06-09-2013 at 3:00 pm

Metastability is an inescapable phenomenon in digital electronic systems. This phenomenon has been known to cause fatal system errors for half a century. Over the years, designers have used convenient rules of thumb for designing synchronizers to mitigate it. However, as digital circuits have become more complex, smaller and faster with reduced voltage, these rules of thumb are beginning to fail. A new tool from Blendics, MetaACE, has been developed that accurately evaluates metastability failures in contemporary SoCs.

A synchronizer settles to a valid output voltage in a period of time that has no upper bound. This settling-time regime is largely exponential with a time constant τ. Through a number of past semiconductor process generations, τ has been proportional to FO4 delay and has decreased with every generation, thus providing better synchronizer performance at each generation. However, a change in the relationship between τ and FO4 delay has emerged at process geometries of 90 nm and below. Operating conditions and process variations further aggravate the situation and can cause many orders of magnitude of variation in the MTBF of a synchronizer. As a result, traditional guidelines for synchronizer design are no longer adequate. To illustrate how these traditional rules of thumb fail, Figure 1 shows the effect of supply voltage on τ and, in turn, on MTBF.


Figure 1. Settling time-constant τ, FO4 delay and MTBF as a function of the supply voltage (V) for a 65 nm CMOS synchronizer operated with a 200 MHz clock.

Note that τ varies by almost an order of magnitude more than does the delay through an FO4 inverter. An equivalent increase in transistor threshold voltage Vth produces the same difference between the FO4 delay and τ. Such an increase in Vth can occur under low-temperature operation of the synchronizer. The combination of low supply voltage and low temperature can lead to sub-second values of MTBF and an almost certain system failure.

It would be advantageous to be able to predict synchronizer performance before fabrication. This would aid the designer in building a reliable, but not over-designed synchronizer (over-design adds area and latency to an otherwise reliable design). Blendics has developed a software system, MetaACE, that accurately predicts synchronizer MTBF.

Simulating a synchronizer can provide the essential parameters intrinsic to a particular semiconductor process, but more information is needed to estimate the MTBF of the circuit in a particular application. Extrinsic parameters such as clock period, clock duty cycle, rate of data transitions and number of stages in the synchronizer depend on the application and not on the semiconductor process. The MTBF for these various applications of a synchronizer design can be calculated given the intrinsic parameters, however. Figure 2 compares the calculated and simulated results for 2-, 3- and 4-stage flip-flop synchronizers for various clock periods and a data transition rate of 200 MHz.


Figure 2. Comparison of calculated and simulated estimates of MTBF.

It is clear from Figure 2 that there are extrinsic conditions under which even a 2-flip-flop synchronizer at a nominal supply voltage and temperature is unreliable. At a 1 ns clock period, a typical double-ranked 90 nm synchronizer’s MTBF is less than a year and probably inadequate. Increasing the number of stages to four increases the MTBF to about 10^10 years, more than adequate for most cases.
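For readers who want to see how the intrinsic and extrinsic parameters described above combine, here is a minimal sketch in Python of the classical first-order MTBF model. This is the textbook approximation, not MetaACE’s algorithm, and the parameter values are placeholders for illustration rather than figures from the article.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def synchronizer_mtbf(tau_s, tw_s, f_clk_hz, f_data_hz, n_stages=2, setup_s=0.0):
    """Classical first-order MTBF estimate for an n-stage synchronizer.

    MTBF = exp(S / tau) / (T_w * f_clk * f_data), where S is the total
    settling time available: roughly (n_stages - 1) clock periods minus
    setup time. Textbook approximation only, not MetaACE's model.
    """
    t_clk = 1.0 / f_clk_hz
    settle_s = (n_stages - 1) * t_clk - setup_s
    return math.exp(settle_s / tau_s) / (tw_s * f_clk_hz * f_data_hz)

# Placeholder intrinsic parameters (illustrative only, not from the article):
tau, tw = 25e-12, 20e-12   # settling time constant and metastability window

for n in (2, 3, 4):
    mtbf = synchronizer_mtbf(tau, tw, f_clk_hz=1e9, f_data_hz=200e6, n_stages=n)
    print(f"{n}-stage MTBF @ 1 ns clock: {mtbf / SECONDS_PER_YEAR:.3g} years")
```

Because τ sits in the exponent, a modest increase in τ at low supply voltage or low temperature collapses the MTBF by many orders of magnitude, which is exactly the failure of the old rules of thumb that the article describes.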

Manufacturers of mission-critical products should carefully consider the risk of synchronizer failure and take the necessary steps so that their engineers and their semiconductor vendors ensure a satisfactory MTBF over the system lifetime, particularly when human lives are at risk.

About the Author
Dr. Jerome (Jerry) Cox, President, CEO and Founder, Blendics
Jerry received his BS, MS and ScD degrees in Electrical Engineering from MIT. From 1961 to 1973 he introduced small computers to biomedical research and participated in pioneering research in asynchronous computing. From 1974 to 1991 he was chair of the Washington University CSE Department, and in 1997 he became founder and Vice President for Strategic Planning of Growth Networks. This venture-funded chip-set company was sold to Cisco in 2000 for $350M and eventually led to the top-of-the-line Cisco router, the CRS-1. Over his professional career he has taught and done research in computer architecture, medical imaging, statistical estimation theory and medical signal analysis. He has designed and built many digital systems, using both synchronous and asynchronous approaches. He has published over 100 scientific papers and book chapters and holds ten patents.




DAC IP Workshop: Are You Ready For Quality Control?
by Paul McLellan on 06-07-2013 at 3:08 am

On Sunday I attended an IP workshop presented by TSMC, Atrenta, Sonics and IPextreme. It turns out that the leitmotif of the afternoon was SpyGlass.

Dan Kochpatcharin of TSMC was first up and gave a little bit of history of the company. They built up their capacity over the years, as I’ve written about before, and last year shipped 15 million 8″ equivalent wafers. That’s a lot.

Ten years ago, TSMC could pretty much get away with handing out the SPICE models and the DRC rules and letting design teams have at it. That no longer works, because the complexity of the process means that each generation needs the tool chain to be adapted (for example, double patterning at 20nm), and nobody, not even the biggest fabless guys, designs every block on their chip. IP for the process needs to be ready, especially memories, DDRx controllers, Ethernet and so on.

So TSMC started the IP alliance in 2000 for hard IP. Each block is tracked through a qualification process that starts with physical review (it must pass DRC or…fail), then DRM compliance (ditto), pre-silicon assessment (design kit review), typical silicon assessment (tapeout review), split-lot assessment, IP validation (characterization) and volume production (tracking customer use and yield). They have about 10,000 IP blocks in the system, of which 1,500 had problems, 373 of which were serious enough that they would have potentially been fatal. When a mask set costs $10M, that is $3.7B in saved mask charges alone.


In 2010 they extended the program to soft IP (RTL level), working with Atrenta SpyGlass as the signoff tool. In the first go-around, they focused on whether the RTL was correct and clean enough to pass Atrenta’s lint checks, making sure the clocks were correct and so on. By the second version, using SpyGlass Physical, they were on the lookout for potential congestion and timing problems.

Next up was Mike Gianfagna from Atrenta. The focus of Atrenta at DAC this year is that the tools are now ready for RTL signoff. This doesn’t replace post-layout signoff which will still be required and it certainly doesn’t imply that design closure will simply happen without any manual intervention and ECOs. But it can catch a lot of problems early and ensure that the physical design process goes smoothly. The big advantages of working at the RTL level are twofold. Firstly, when problems are found they are much easier to address. And secondly, the tools (SpyGlass and others) run orders of magnitude faster than at the netlist or physical levels.


Run time would not matter if there were not good correlation between what SpyGlass predicts at the RTL level and what reality turns out to be post-layout. The most mature part of Atrenta’s technology is in the test area, where they have been working for over 10 years. The prediction for stuck-at fault coverage at the RTL level is within 1% of the final numbers; for at-speed coverage it is within 2%. Power is more like 10%, area 5-10%, and so on.

Atrenta/TSMC’s IPkit is now used by all partners in TSMC’s IP program. There are twice as many partners involved at this year’s DAC as there were in 2012. IPkit 3.0 will add BugScope assertion synthesis to get better IP verification scripts.


After a brief break for coffee it was Sveta Avagyan of Sonics. She had been given a little design using IPextreme IP, based around a ColdFire processor (68000 architecture). Sonics has various interconnect and network-on-chip (NoC) technologies. Sveta showed us how to use the GUI to configure the whole subsystem interconnect. She could then use SpyGlass to make sure it was clean. Things that SpyGlass calls out as errors may, in fact, be OK, so one way to resolve a problem is to issue a waiver that says it is actually acceptable. SpyGlass will record the waiver and track it all the way through the design process. Eventually, when the design is ready, it can be used in a chip or uploaded to IPextreme’s cloud-based infrastructure, Xena.


Warren Savage discussed how Xena makes it easy for IP creators to upload designs, either fixed or parameterizable, to the cloud and for users to download them. However, Xena can also run SpyGlass (in the cloud) to produce reports on the quality of the IP, record waivers and so on.

So SpyGlass is now the de facto standard for IP quality. TSMC uses it for their soft IP program, IP providers such as Sonics can use it during IP creation (whether that creation is manual or something closer to compiler-generated, as at Sonics), and IPextreme can use it to qualify IP. Users can pull down some of the reports or run SpyGlass themselves on IP before deciding finally whether to use it or not. Everyone wants their IP to be SpyGlass clean (and for IPextreme and TSMC to be happy with the quality too, not to mention their actual users).



Hierarchical Design Management – A Must
by Pawan Fangaria on 06-06-2013 at 8:30 pm

Considering the pace of technological progress, economic pressure, increased outsourcing and IP re-use, the semiconductor industry is one of the most challenged industries today. Products become outdated very frequently, leading to new development cycles, and it becomes very difficult and costly to rebuild the whole data foundation each time. Systematic management of design data and its re-use is a must in order to handle such frequent changes in product designs, thereby maintaining and improving the economic health of the organization.

Last month I wrote about DesignSync, a robust design data management tool, and its multiple advantages. Digging further into that data management methodology, I found out how INSIDE Contactless (an innovative company designing chips for payment, access control, electronic identification etc.) used the hierarchical design management methodology offered by Dassault Systemes in its DesignSync tool to turn a difficult situation, with design data and IPs scattered across different databases and multiple teams at remote sites, into an opportunity with unified data management, leading to success in the business.

INSIDE used design tools from Cadence and the Design Data Management (DDM) tool from Dassault, synchronized them to provide modular data abstraction within the context of hierarchical configuration management, and obtained excellent team collaboration across multiple sites, resulting in improved productivity and time-to-market.

The concept of static and dynamic HREFs (hierarchical references) enables the creation of multiple design modules and hierarchies under a root design. This brings controlled flexibility and parallelism to the design development process, with a unique database for the overall design and strict control over integration before release. The project hierarchies can also contain software, documents, scripts and IPs from various sources with different time stamps, along with the design data. The data repositories worked on by particular teams can be placed at strategic locations to reduce network traffic. It’s a client-server platform with servers nested hierarchically at multiple sites.

The design is built up hierarchically from the lowest unique abstraction of data, called a “module”, which is a consistent collection of files and folders with access commands like check-in, check-out and modify. Revision history is maintained at each level. HREFs connect modules and are processed when design data is fetched into the workspace. This provides systematic, automated integration of the design under a unified Design Data Management (DDM) system, which can be either single or distributed.
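To make the module/HREF idea concrete, here is a small toy model in Python of how a root design might reference sub-modules through static (fixed-release) and dynamic (tag-following) references. This is a conceptual sketch only; the class and method names are invented for illustration and are not DesignSync’s actual API or commands.

```python
# Toy illustration of hierarchical references (HREFs), not DesignSync's API.
# A static HREF pins a specific release; a dynamic HREF follows a tag
# (e.g. "Golden") and resolves to whichever release currently carries it.

class Module:
    def __init__(self, name):
        self.name = name
        self.releases = {}   # version string -> list of files
        self.tags = {}       # tag name -> version string

    def release(self, version, files, tag=None):
        self.releases[version] = files
        if tag:
            self.tags[tag] = version

def resolve(module, ref):
    """Resolve an HREF: a fixed version string (static) or a tag (dynamic)."""
    version = module.tags.get(ref, ref)
    return version, module.releases[version]

# Example: an analog block released twice, with the "Golden" tag moved forward.
adc = Module("adc_frontend")
adc.release("1.0", ["adc.sch", "adc.lay"], tag="Golden")
adc.release("1.1", ["adc.sch", "adc.lay", "adc_fix.sch"])
adc.tags["Golden"] = "1.1"

print(resolve(adc, "1.0"))      # static HREF: always version 1.0
print(resolve(adc, "Golden"))   # dynamic HREF: follows the tag, now 1.1
```

The point of the sketch is the distinction the whitepaper draws: at tapeout time the integrator wants static references that cannot move underneath the design, while during development the dynamic, tag-based references let teams pick up each other’s latest qualified releases automatically.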

Dassault provides static and dynamic workflows: “SITaR” (Submit, Integrate, Test, and Release) and “Golden Release”. INSIDE used “SITaR”, which is suited to static HREFs referring to a specific release, at tapeout time when design and simulation are done on the baseline data, and “Golden Release” during the development phase, where tags like “in development”, “Ready4Use”, “Golden” etc. were applied to dynamic HREFs. This labelling of the hierarchical structure gives the integrator strict control over the data. He/she can validate and integrate the static data efficiently without any delay. A detailed methodology can be found in Dassault’s whitepaper here.

This methodology fits well into the strategy for semiconductor PLM, about which I wrote earlier. It helps with efficient data management, building up intelligence for work estimation, scheduling and execution, cost estimation, and efficient and effective re-use of IPs to meet the challenges of SoC design and business.



Reshoring Semiconductor Manufacturing
by Paul McLellan on 06-06-2013 at 5:29 pm

So where in the world do you think semiconductor manufacturing is increasing the fastest? OK, Taiwan, that was pretty easy. But in second place, with over 20% of the world’s semiconductor equipment capital investment, is the US, growing faster than Europe, China and Japan and equal with Korea.

This was not the case half a dozen years ago. Intel was building its first fab in China at Dalian. AMD was ramping Dresden (Germany). Most semiconductor companies were transitioning to fab-lite models with modern processes being manufactured in Asia, and old fabs being milked using non-leading-edge processes. It seemed inevitable that semiconductor manufacturing would mostly be outsourced just like most other manufacturing.

And then suddenly it wasn’t. Just like GE seems to be doing in white goods (interesting article in the Atlantic here), suddenly new and expanded fabs are sprouting all over the US. AMD spun out their manufacturing to form GlobalFoundries, and one of the first decisions was to build a brand new state-of-the-art fab in Saratoga County in upstate New York. Samsung decided to more than double their large fab in Austin, Texas, which I believe will be the biggest fab outside of Asia. Micron is expanding. Intel is expanding in Oregon and Arizona.

In 2013, it looks like over $8B will be spent on semiconductor equipment to outfit these new or expanding fabs. According to SEMI, the semiconductor equipment and materials industry association:

  • Intel will spend up to $3.5 billion, primarily at their Fab 42 in Arizona and D1X fab in Oregon
  • GLOBALFOUNDRIES will invest $1.2-$1.8 billion on equipment at their new fab in New York
  • Samsung will spend $1.8-$2.5 billion to increase capacity at their Austin facility
  • Micron, CNSE (NanofabX for G450C), IBM, and Maxim may collectively spend up to $1.5 billion in equipment this year

The numbers are expected to be even bigger in 2014.

See the SEMI report on this topic here.

And in case you’ve never heard of SEMI:

The industries that comprise the microelectronics supply chain are increasingly complex, capital intensive, and interdependent. Delivering cutting-edge electronics to the marketplace requires:

  • Construction of new manufacturing facilities (fabs)
  • Development of new processes, tools, materials, and manufacturing standards
  • Advocacy and action on policies and regulations that encourage business growth
  • Investment in organizational and financial resources
  • Integration across all segments of the industry around the world

Addressing these needs and challenges requires organized and collective action on a global scale.
SEMI facilitates the development and growth of our industries and manufacturing regions by organizing regional trade events (expositions), trade missions, and conferences; by engaging local and national governments and policy makers; through fostering collaboration; by conducting industry research and reporting market data; and by supporting other initiatives that encourage investment, trade, and technology innovation.