
EUV Makes Progress and Other Observations From SPIE

by Scotten Jones on 02-26-2015 at 1:00 pm

The SPIE Advanced Lithography Conference is the world’s premier conference for patterning techniques utilized to manufacture semiconductors. At any given time during the conference there are multiple parallel sessions so it is impossible to see all of the papers presented. Prior to the conference I reviewed and blogged on some of the papers I was most interested in seeing presented. Now as the conference unfolds I wanted to blog about a few papers from each day that I thought were particularly interesting.

Monday 2/23 – day 1

EUV for SoC: Does it really help? – Greg Yeric, ARM

The first thing that really struck me about this talk was the disconnect between transistor scaling and what actually happens in designs. My background is in processing, and as process engineers we like to track scaling using metrics like gate pitch multiplied by metal pitch, or SRAM cell size (and in fact he referenced both). By both of these metrics we are continuing to see scaling along historical trends. However, as the talk pointed out, the SRAM cell sizes reported in the literature are for 6T SRAM cells, while in critical applications 8T and 10T SRAM cells are becoming common, so even though transistors are scaling it doesn’t necessarily translate to designs. Even when a 6T SRAM cell is used, it often has 2 fins per transistor. The result is that if you compare SRAM cell size versus frequency for 28nm and 14nm, you see a 4X improvement in size at low frequency, but at high frequency the size advantage gets much smaller. Many other issues add up to a situation where a longer gate length can actually result in a smaller die, due to the ability to drive longer wires, avoid repeaters, and other factors.

He also discussed how gate length scaling has been slowing due to electrostatics and the increasing problems that variability creates. We got a one-time improvement with the move to FinFETs, where the lower channel doping improves variability. The view expressed was that FinFETs are great at 14nm and OK at 10nm, but may not make 7nm without further improvements. Higher-mobility channel materials like germanium look good in theory, but in practice realistic contact resistance and contact size negate many of the advantages.

Via resistance is also a significant issue. If you look at Intel’s 14nm process the via aspect ratios have been reduced for that reason.

A hidden problem in the last few nodes is a half-node to full-node loss in metal scaling due to all the new layout rules. Metal 1 rules are now so complex that routers can’t handle them, and there has been an approximately 20% loss in area. This is where EUV could make a big impact: by relaxing the design rules, the half- to full-node loss could be recovered. In the front end of line (FEOL), SADP has resulted in improved LER, and it was suggested that it is unlikely we would move back to single exposure, but in the back end of line (BEOL) EUV could have a big impact.

To summarize this talk I would say there are a lot more challenges to scaling than just lithography. EUV has the potential to help but mostly in the BEOL.

Status of EUV Lithography – Anthony Yen, TSMC
Last year TSMC gave a very pessimistic assessment of EUV; this year the news was much better.

Last year at this time TSMC was only seeing about 10 watts of source power at intermediate focus; this year it is up to 90 watts. This represents the first time EUV has actually hit a source power milestone; in fact, it is slightly ahead of where they thought they would be. The forward forecast is for 125 watts in late Q2 and 250 watts in late Q4. Both of these forecast goals will require the second generation light source.

Average tool availability is still only running at 55%. Over an 8-week period with a 40 watt source, TSMC averaged 203 wafers/day for a total of 11,375 wafers (current ArFi tools can run >200 wafers per hour). After the 80 watt upgrade, TSMC ran 1,022 wafers in a single day. These numbers are a huge improvement from last year, although they still need to double for production. The tin droplet generator has to be replaced approximately every 4 days, and the replacement takes most of a day. ASML is working on an improved droplet generator. The droplet generator has to run at an amazing 50,000 droplets per second!
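As a quick consistency check on the throughput numbers above (my own back-of-the-envelope arithmetic, not anything from the talk):

```python
# Reported: 11,375 wafers over an 8-week window with the 40 watt source.
wafers = 11_375
days = 8 * 7                 # 56 days
per_day = wafers / days      # average wafers per day, ~203

# For contrast, an ArFi immersion scanner running >200 wafers per HOUR
# can do roughly this many in a day, which is why EUV availability and
# source power still need to improve dramatically for production.
arfi_per_day = 200 * 24
```

The average works out to just over 203 wafers/day, consistent with the figure quoted, versus thousands per day for an ArFi tool.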

Current EUV photoresists are good down to a 16nm half-pitch; below that the required dose rises rapidly. A new photoresist with less blur is needed, and there is a lot of work being done on metal-based photoresists.

Mask blank defects are getting better but are still too high. With low enough defects and a precise map the defects can be hidden under an absorber. Current blanks can be used for via and contact but not for line/space masks. Mask inspection and repair is also still a work in progress. The following table summarizes the status:

| Inspection type | Current | Intermediate | Final solution |
|---|---|---|---|
| Mask blank | 193nm | | 13.5nm (actinic) |
| Patterned mask | 193nm | eBeam | 13.5nm (actinic) |
| Defect repair | Wafer printing | | 13.5nm (actinic) tool from Zeiss due later this year |

Pellicle development has produced a half size pellicle with 85.5% transmission. A full size pellicle with >90% transmission is still needed. TSMC is targeting a full size pellicle by the end of Q2.

In summary, excellent progress has been made this year, but there is still a lot of work to be done before EUV is ready for high volume manufacturing. Assuming everything stays on track, we could see readiness in 2016, possibly for a late 10nm node insertion. The question will then become how EUV matches up against multi-patterning solutions on a layer-by-layer basis.


Got FPGA Timing Closure Problems?

by Paul McLellan on 02-26-2015 at 7:00 am

I had a meeting with Harn Hua Ng, the CEO of Plunify, a couple of weeks ago. They are an EDA company that I’d never heard of. That is partially because they only play in the FPGA space, a country I visit less frequently than SoC land, and partially because they are based in Singapore, a country I have only been to a couple of times in my life.

Plunify was founded in 2009 and started work on a prototype (self-funded). In 2011 they received an investment from the Singapore government’s fund SPRING. In 2012 they had a beta version of their EDAExtend Cloud.

The basic idea was to do cloud-based optimization of FPGA designs. You fire up a couple of dozen servers out in the cloud, put your design out there, and it will close timing much better than you can yourself, using a mixture of parallel processing (trying lots of different options at once) and learning algorithms (once run #1 is done, the tool can select better options for run #2, and so on until everything converges). The technology worked pretty well, but there was one big problem that almost everyone who has tried to do cloud-based EDA has run into: companies are not prepared to put their crown-jewel designs out in the cloud, due to security fears and sometimes company policy. Although the technology worked well, it was basically impossible to sell.

But it turned out customers loved the technology, just not the way that it was delivered. So Plunify pivoted and produced inTime, essentially the same technology but not running in the cloud, running on the customer’s own server farms.

So what does it do? FPGA designs (well, all designs) want to reduce area, reduce power and meet timing. The traditional approach when timing is not met is to use one or more items from this menu:

  • Alter the RTL
  • If negative slack is small try fiddling with placement seeds
  • Tweak a few synthesis/P&R settings

What Plunify’s product, called inTime, does is a different loop:
  1. Generate strategies based on its database (a strategy is a set of synthesis/P&R settings)
  2. Implement all the strategies in parallel on the server farm (maybe 20-30 servers)
  3. Use machine learning to analyze the results
  4. Update the database with new knowledge
  5. Go back to step 1 until either timing closure is reached or the tool determines that it is impossible and gives up

In essence it is doing a highly optimized search of the whole space of settings (which might be of the order of 10^100 choices, so brute force has no chance) until it finds a strategy that meets timing (zero negative slack). The diagram below shows inTime running a large number of strategies. The orange bars have a lot of negative slack. The green bars still have negative slack, but they are the best solutions, the ones that inTime will start from in round 2, trying strategies nearby. In fact, like simulated annealing, it sometimes tries stuff that is far away to avoid getting trapped in local minima.
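The loop described above can be sketched as a toy iterative search. Everything here is hypothetical: `run_tools` stands in for a real synthesis/P&R run, and the "database" is just the best strategy found so far; inTime's actual machine learning is of course far more sophisticated.

```python
import random

random.seed(0)

# Hypothetical stand-in for a synthesis/P&R run: maps a "strategy"
# (a tuple of tool settings) to worst negative slack in ns (0.0 = met).
def run_tools(strategy):
    good = (3, 1, 4, 1)  # settings the search does not know in advance
    return -0.1 * sum(abs(a - b) for a, b in zip(strategy, good))

def nearby(strategy, count):
    """Generate strategies 'nearby': change one setting at a time."""
    out = []
    for _ in range(count):
        s = list(strategy)
        s[random.randrange(len(s))] = random.randint(0, 5)
        out.append(tuple(s))
    return out

best = (0, 0, 0, 0)                       # round 1: an arbitrary start
for _round in range(20):
    # Try ~20 strategies "in parallel", plus one far-away one to avoid
    # getting trapped in a local minimum (the simulated-annealing idea).
    batch = nearby(best, 20) + [tuple(random.randint(0, 5) for _ in range(4))]
    winner = max(batch, key=run_tools)
    if run_tools(winner) > run_tools(best):
        best = winner                     # "update the database"
    if run_tools(best) == 0.0:
        break                             # closure: zero negative slack
```

Each round keeps the best strategies found so far and explores around them, which is exactly the orange-bars-to-green-bars progression in the diagrams.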


In round 2 there are still lots of horrible results with large negative slack, but there are also now 7 solutions that pass with zero negative slack.


The results are impressive. Here is an example of Huawei doing a design with 98% utilization. To put that in perspective, many companies have a rule that if utilization is over 60% then you go to the next bigger array. Doing a design at 98% utilization is almost insane. But in 6 rounds, with a total elapsed time of 10 hours, inTime succeeds in getting closure.


Once a design has reached closure, even if the RTL is changed (by up to 15-20%), inTime will start from the good strategies from last time and achieve closure much more quickly than from a standing start.

One of the benefits of inTime is that instead of spending 4-6 weeks fixing timing manually (and perhaps failing), inTime will find a solution in just a few days from first seeing the design.

Of course it doesn’t always succeed; it is possible to simply demand more than is possible from the FPGA. But when it does fail, it will reveal which paths were critical most often in its runs, giving the designer guidance on where it might make sense to consider changing the RTL.

A second benefit of inTime is that if you take a design that is at, say, 60% utilization and drop it down to the next smaller (and cheaper) array, the utilization may now be 90%. But if inTime can close timing, you have a huge saving by being able to use a smaller array. In a similar way, it might be possible to get a design into an array of the same size but a lower speed grade, again saving on cost.

There is a webinar on Plunify’s inTime titled Got FPGA Timing Closure Problems? It is on Tuesday March 10th at 10am Pacific Time. The presenters are Harn Hua Ng, Plunify’s CEO, and Tim Davis, the president of Aspen Logic, an FPGA design and consulting services group. The webinar will be moderated by Dan Ganousis.

Plunify has a Silicon Valley presence with an office in Los Altos. Their website is here. You can register for the webinar here.



Arteris Sees Consolidation Amid ADAS Gold Rush
by Majeed Ahmad on 02-25-2015 at 10:00 pm

Sensor fusion in vehicles is leading to a new era of information sharing among almost all components of a car, including the chassis, the suspension, and the rapidly growing Advanced Driver Assistance Systems (ADAS). According to network-on-chip (NoC) interconnect IP provider Arteris Inc., as more cameras and sensors are added to cars, the scale of the electronics content required to make sense of this information will also go up.

In other words, computational consolidation is taking place: bigger system-on-chip (SoC) devices are gradually replacing the MCUs built into a car’s electronic subsystems. This consolidation comes as a result of more powerful SoCs being needed to take information from all the sensors, put it together, and make it ready for applications.


Advanced Driver Assistance Systems or ADAS
(Image: Marvell Technology)

Kurt Shuler, VP of Marketing at Arteris, told SemiWiki that sensor fusion is a much harder job in cars, where there are so many objects to watch and these objects are mostly in a state of relative motion. Take the ADAS features, for instance, which track road conditions, lanes, pedestrians, etc. ADAS technology makes proactive use of low-cost sensors and cameras to improve car safety and help avoid road accidents, which makes it a key highlight of the connected car movement. And for ADAS sockets, the traditional MCU players are consolidating into bigger chips.

Arteris is betting big on connected car technologies like ADAS and the bigger SoCs that are becoming imperative to make ADAS a commercial success. The Campbell, California-based firm has recently joined hands with Yogitech S.p.A. to add functional safety verification IP on top of its FlexNoC Resilience Package IP solution. The partnership between Arteris and Yogitech will allow car SoC designers to automate the ISO 26262 test coverage and fault injection needed for car safety certification.


Arteris and Yogitech: ISO 26262 certification solution
(Image: Arteris Inc.)

Shuler said that ADAS is all about functional safety. He added that the promise of ADAS and car safety has attracted new entrants into the automotive SoC market, mentioning Nvidia and Qualcomm. “There are a lot of consumer electronics companies that have experience with camera and application processor technology and they are attacking hard on the ADAS market.”

An ADAS Success Story

Shuler recalled how the notion of ADAS initially remained lackluster, mostly because of its reliance on expensive radars. Then came Mobileye, one of Arteris’ customers, which put cheap mobile phone cameras into cars and supported them with strong software algorithms and the necessary processing power through its EyeQ chip. Mobileye developed the EyeQ image processing chip in collaboration with STMicroelectronics back in 2006.


Camera and processing module
(Image: Mobileye)

Mobileye’s ADAS technology was based on a vision system that used a single chip, which in turn significantly reduced cost and packaging complexity for car OEMs. However, at that time, Mobileye didn’t get much attention from tier-one car vendors. So it began selling its ADAS technology in the aftermarket to existing vehicle owners.

Mobileye set up a subsidiary in Los Angeles to pitch its ADAS technology to distributors for use in truck fleets, as well as to individual car owners for about US$1,000 a unit. It also signed up distributors in Europe and Japan. The value proposition became apparent within a couple of years, and seven of the top 10 auto industry suppliers, including Continental, Delphi and Magna International, eventually adopted Mobileye’s ADAS technology.

Fast forward to 2015: Mobileye claims that its ADAS features will be available in 237 car models from 20 car OEMs, including BMW, Chrysler, Ford and General Motors. Moreover, according to a recent press release from STMicroelectronics, the EyeQ vision processor has been deployed in more than one million vehicles around the world. Mobileye went public in July 2014 and now has a market cap of US$10 billion.

Mobileye’s ADAS technology employs a combination of forward-facing cameras and low-cost radars to detect pedestrians, cyclists, construction zones, barriers and debris on the road. It can also analyze traffic lights and road signs.

A monocular camera tracks images and then uses software to calculate how much time is needed to brake to avoid a collision. An alarm goes off if the driver gets too close; the system can automatically hit the brakes if the driver does not respond to the threat in time.
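The braking-time calculation can be illustrated with the classic monocular time-to-contact estimate: as an object approaches at constant speed, its image grows, and the ratio of image sizes between frames gives the time remaining. This is a generic textbook sketch, not Mobileye's actual algorithm.

```python
def time_to_contact(size_prev, size_curr, dt):
    """Seconds until contact, from the growth of an object's image size
    between two frames taken dt seconds apart (constant closing speed)."""
    scale = size_curr / size_prev  # image size is proportional to 1/distance
    if scale <= 1.0:
        return float("inf")        # not getting closer
    return dt / (scale - 1.0)

# An object whose image width grows from 100 to 104 pixels between two
# frames 1/30 s apart is about 0.83 s from contact.
ttc = time_to_contact(100, 104, 1 / 30)
```

Comparing the time-to-contact against the time needed to brake is what decides when to sound the alarm or apply the brakes.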

V2V/V2I: Long Way to Go

Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) schemes, sometimes collectively known as V2X, are commonly pinned as a car safety approach competing with ADAS technology within the larger connected car landscape. However, as Arteris’ Shuler pointed out, V2V and V2I technologies have a long way to go.

For a start, there will be a lot of conflict over who invests in the road safety infrastructure: the public sector or the private sector. States, countries and municipalities will have to see an ROI before they can justify investing in V2V and V2I technologies.

Then, governments around the world have to be involved to balance public and private interests, ranging from frequency spectrum allocation to public rights of way to private property for infrastructure. It can take years, even decades, before V2V and V2I standards and enough infrastructure are in place for a lot of companies to compete.


Connected car: ADAS vs. V2V/V2I
(Image: Reuters)

Meanwhile, the National Highway Traffic Safety Administration (NHTSA) in the United States will require all new cars under 10,000 pounds to have rear-view cameras by 2018. Shuler said this requirement will create a ready-made socket for adding more advanced ADAS technology.

Shuler added that in-car infrastructure components will come quicker than public infrastructure, and that favors ADAS, the technology that is here now and is building momentum one chip at a time.

Related reading:
ADAS Going Mainstream One Chip at a Time
Arteris Adds Functional Safety to NoC Interconnect IP, Aims Auto SoCs



Who Leads Semiconductor Innovation?
by Pawan Fangaria on 02-25-2015 at 5:30 pm

The semiconductor business is highly dependent on technology, and technology changes very rapidly in the semiconductor space, so it’s important to recognize the role of research and innovation. In my last article, on the 7nm technology node, one respondent commented, very rightly, “It’s important to have competition, which gives rise to innovation in the semiconductor industry.” Competition in the semiconductor space is certainly intense. In a couple of my earlier articles, “Look Who is Leading The World Semiconductor Business” and “Is Fab Business The Forte of APAC?”, Asia-Pac appeared to be the leader in the semiconductor business. However, after seeing the R&D investment made by the top semiconductor companies around the world, I have to revise that view. Production and sales, and of course a substantial amount of R&D, have spread across the world due to several factors, and the semiconductor business is concentrated in Asia-Pac today. However, in terms of the R&D spending that drives innovation, the USA is the undisputed leader. That reminds me of a newspaper quote that referred to the USA as the single engine driving the world economy.

Yesterday, I was studying the IC Insights report on top semiconductor R&D spenders. It makes clear that R&D activity in the semiconductor space is concentrated in the USA.


Among the top 10 semiconductor R&D spenders there are five companies from the USA, three from Asia-Pac, and one each from Japan and Europe. Summing up the five American companies’ R&D spending gives $22,203M (~70% of total top 10 R&D spending) in 2014 and $19,302M (~67%) in 2013. On a worldwide basis, in 2014, the five American companies among the top 10 accounted for ~40% of total worldwide semiconductor R&D spending of ~$56B. Even beyond the top 10 we see companies like Texas Instruments, SK Hynix, Marvell, AMD and Avago, in that order; you know which of them are USA companies. The Asia-Pac figures come to $6,269M (19.7% of the top 10 total) in 2014 and $5,553M (19.3%) in 2013.

There are more interesting data in the table to chew on. Comparing R&D spending in 2014 to 2013, in the USA it increased by ~15%, while in Asia-Pac it increased by ~12.8%. In Japan and Europe, R&D spending declined.

Company-wise, Intel, the top semiconductor company, accounted for the highest share: ~36% of top 10 spending and ~21% of total worldwide semiconductor R&D spending of ~$56B. Qualcomm, the arch-rival of Intel, continued to hold second position and also increased its R&D spending by a massive 62%. In third place, Samsung held its R&D spending roughly flat, with just a ~5% increase. However, we know Samsung’s foundry is collaborating on process technology R&D with two American companies, IBM and GlobalFoundries, and is doing well in 14nm FinFET technology. Notable in the list is MediaTek’s dramatic entry into the ranks of the top 10 R&D spenders. MediaTek (along with its acquisition, MStar) gives Qualcomm severe competition in the Chinese market.

Another important point to note is the R&D/sales ratio. It is lowest for TSMC, at 7.5% in 2014, and highest for Nvidia, at 31.3%. I know Nvidia promotes R&D programs even for the global student and research community, with handsome annual grants on the order of $150,000. Look at my last year’s blog about Nvidia’s research and education activities, “Wanna start something new? Try this…”. The Global Impact Award finalists have been announced this month; look for details here. The other high R&D/sales ratio companies are again in the USA: Qualcomm and Broadcom both have R&D/sales ratios of more than 28%.

As a concluding remark, I must say that in most of my observations and analysis I have found the USA to be a region with an inherent culture of spending on R&D activities at various levels. The above facts about semiconductor R&D spending by USA companies strengthen my belief that the USA is the R&D and innovation leader.



Vietnam: Rising Star in Electronics
by Bill Jewell on 02-25-2015 at 1:00 pm

I recently returned from a trip to Southeast Asia, including Vietnam. The trip was for pleasure, not business, but I could not help but notice the boom in economic activity. The coastal cities of Hai Phong, Da Nang and Nha Trang were trying to outdo each other in building hotels, bridges and amusement parks, largely to cater to foreign tourists. A trip up the river to Ho Chi Minh City (previously Saigon) revealed many huge industrial buildings, including several under construction.

How does Vietnam fit in the electronics industry? The chart below shows exports of electronic equipment for key Asian nations from the World Trade Organization (WTO). The data excludes semiconductors and components. China remains the dominant Asian electronics exporter with US$477 billion in 2013. The next largest exporter is South Korea at $49 billion. Vietnam is eighth at $18 billion. However, over the five years from 2008 to 2013, South Korea, Japan, Malaysia and Singapore all declined in electronics exports, and Thailand and Taiwan each had a compound annual growth rate (CAGR) of only 2%. Vietnam had an explosive 43% CAGR. Although the growth rate of Vietnam’s electronics exports is certain to slow in the next few years, Vietnam could eventually challenge South Korea as the second largest Asian exporter of electronics.
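As a quick sanity check on those growth figures (my own arithmetic, using only the numbers above): a 43% CAGR ending at $18 billion in 2013 implies a 2008 base of about $3 billion.

```python
# Vietnam electronics exports, per the WTO figures cited above.
end_2013 = 18.0                          # US$ billions
cagr = 0.43                              # 2008-2013 compound annual growth
start_2008 = end_2013 / (1 + cagr) ** 5  # implied 2008 exports, ~$3.0B

def compound_annual_growth(start, end, years):
    """CAGR: the constant yearly rate that takes `start` to `end`."""
    return (end / start) ** (1 / years) - 1
```

In other words, exports roughly sextupled in five years, which puts the 43% CAGR in perspective against Thailand's and Taiwan's 2%.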

Vietnam had a population of 89.7 million in 2013 and a labor force of 53.2 million. The labor force is fairly young compared to many industrialized countries, with 50% under 40 years old. Total mobile phones exceed the population, with smartphones at 23% of the population. 41% of the population are internet users, the same percentage that own motorbikes. If you have ever seen the bedlam on Ho Chi Minh City roads, it does appear everyone is using a smartphone while riding their motorbike.

| Vietnam | Millions, 2013 | % of population | Source |
|---|---|---|---|
| Population | 89.7 | 100% | Vietnam government |
| Labor force | 53.2 | 59% | Vietnam government |
| Mobile phones | 140 | 156% | IDC |
| Smartphones | 21 | 23% | IDC |
| Internet users | 36.6 | 41% | Internet live facts |
| Motorbikes | 37.0 | 41% | Vietnam government |

Foreign direct investment in Vietnam in 2013 totaled US$21.6 billion, according to the government. The largest investments by country were Japan at 27%, Singapore and South Korea each at 20%, and China at 11%. Electronics companies with significant investments in Vietnam include Samsung Electronics, LG Electronics, Microsoft’s Nokia division, Intel, Foxconn (Hon Hai) and Jabil. Vietnam experienced steady GDP growth ranging from 5.2% to 6.4% each year from 2008 to 2013, despite the global recession in 2009. There have been some problems recently: in May 2014, many Vietnamese protested a dispute with China by rioting and damaging many factories in Vietnam. Although Chinese-owned factories were targeted, some factories owned by Taiwanese, Japanese and South Korean companies were also damaged. The government appeared to end the protests successfully.

Vietnam has taken inspiration from China in having a capitalist economy with a communist government. Vietnam is well situated geographically: it borders China, sits in the center of Southeast Asia, is generally less susceptible to the natural disasters (earthquakes, tsunamis and typhoons) other countries in the area have experienced, and has several good ports along over 2,000 miles of coastline. Vietnam is on its way to becoming a significant country in electronics manufacturing and semiconductor consumption.



A Brief History of CLKDA: Every Picosecond Counts Below 28nm
by Paul McLellan on 02-25-2015 at 7:00 am

One thing to point out is that the CLK of CLKDA are the initials of the founders; they are not focused on clocks! I’m sure you can guess what DA stands for, although it is also the last two letters of the fourth founder’s name.

They have been in existence since 2005, backed by Atlas Venture and Morgenthaler. They are headquartered in Littleton, MA, just outside Boston. The CEO is Isadore Katz.

In the early days they did some other stuff (STA), but they have since pivoted, and CLKDA is now the market and technology leader in timing variance analysis. FX is the first transistor model and simulator specifically engineered for digital variance and delay analysis. FX is in production at the most advanced IC geometries, 20nm, 16nm and 14nm, with all of the leading foundries. CLKDA drove the creation of the Liberty Variance Format (LVF), the open standard for modeling timing variance.

Starting at 40nm, manufacturing variance became a serious issue that had to be addressed during timing sign-off; traditional manufacturing corners were no longer sufficient. If designers ignored manufacturing variance, yield would suffer. But over-compensating didn’t work either: the center of the timing distribution (typical) moved from the previous node, but excessive pessimism meant that worst case was almost unchanged, making timing closure next to impossible and wasting power. To make things worse, the tails of the distribution were also getting longer.

CLKDA brought together a team of EDA and semiconductor veterans with expertise in timing, circuit design, and simulation, as well as applied mathematics and distributed computing. The result was a very efficient, fully distributed static timing framework combined with a radical new circuit simulator and model called FX, the first transistor-level model specifically designed for timing delay and variance. What makes FX special is its ability to model timing variance without using sampling: variance is solved mathematically. FX can literally be tens of thousands of times faster than Monte Carlo SPICE, and stay within 2% of SPICE results.
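The contrast between sampling and solving variance mathematically can be illustrated with a toy delay model (my own sketch; FX's actual mathematics is proprietary and far more general). Here a delta-method propagation needs one derivative where Monte Carlo needs hundreds of thousands of samples:

```python
import math
import random

random.seed(1)

# Toy gate-delay model: delay (ps) grows exponentially with the
# threshold-voltage deviation dvt (volts) of one transistor.
D0, K = 10.0, 25.0
def delay(dvt):
    return D0 * math.exp(K * dvt)

SIGMA_VT = 0.002  # std-dev of the threshold-voltage deviation

# Monte Carlo, the MC SPICE approach: sample dvt many times.
samples = [delay(random.gauss(0.0, SIGMA_VT)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

# Analytic (delta-method) propagation: one derivative, no sampling.
# d(delay)/d(dvt) at dvt = 0 is D0*K, so sigma_delay ~= D0*K*SIGMA_VT.
sigma_analytic = D0 * K * SIGMA_VT
```

The two estimates agree closely here, but the analytic path costs one evaluation instead of 200,000, the same economics that let an analytic approach replace months of Monte Carlo runs.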

CLKDA began development of FX in collaboration with TSMC in 2008. The first targeted application was the efficient generation of stage-based on-chip variation tables (SBOCV, aka AOCV) for use during sign-off timing. Generating these tables consumed literally months of Monte Carlo SPICE simulations, and a much better solution was required.

The result was AOCV FX. Introduced at DAC 2010, and included in TSMC Reference Flow 11, AOCV FX was the first commercial solution for generating SBOCV and AOCV tables. Using the FX model, AOCV FX is thousands of times faster than Monte Carlo SPICE for generating derate tables with the same compute resources. AOCV FX made SBOCV and AOCV table generation a practical reality.

Since 2010, FX has evolved into a family of products that address high-accuracy timing and variance. Each of the FX applications solves mission-critical problems for chip frequency, yield, and time to market. They complement existing sign-off flows by adding variance information (e.g. derates) or addressing critical gaps in the flow (such as critical path timing waivers).

In 2013, Variance FX was introduced to extend CLKDA’s derate analysis. In addition to AOCV, Variance FX supports POCV, SOCV, and Liberty Variance Format. It generates delay and slew variance information, as well as variance models for timing constraints (setup and hold uncertainty). Macro FX was introduced in 2014; it extends Variance FX for complex functions such as flop trays, retention flops and other large custom cells.

Path FX was introduced along with AOCV FX. Path FX delivers Monte Carlo SPICE-accurate timing with the ease of use and performance of a general-purpose static timing analyzer; it can run tens of thousands of paths in minutes with MC SPICE accuracy. In 2013, Clock FX was introduced to address the specific requirements of high-accuracy clock tree analysis.

CLKDA is extending its product capabilities to address any part of an SoC where digital circuitry could fail due to variance, at the cell, path or full-chip level. So if you are designing a chip (or some IP) in an advanced node, then you need to worry about variance and preemptively address it.

CLKDA’s website is here.



    Samsung 14nm IS in Production!
    by Daniel Nenni on 02-24-2015 at 10:00 pm

    There is quite a debate raging on whether Samsung Foundry is truly in production at 14nm. The word amongst the fabless semiconductor ecosystem is yes and this comes from two very large fabless companies that are reportedly using Samsung for 14nm and have even started looking at Samsung 10nm. Of course you can Google for stories by the foreign press about this and find just about whatever you need to support either side of this argument which is what many people have done. The fact of the matter is that the fabless semiconductor ecosystem is a very close knit industry so there are no secrets.

    Really, all you have to do is attend some of the many conferences we have every year and talk to the people who actually do the work. SPIE and ISSCC are both this week. CDNLive is coming up as well as SNUG and User2User which are filled with semiconductor professionals from Silicon Valley and let’s face it Silicon Valley is where most of the fabless semiconductor magic happens, absolutely.

    Still, there seems to be some confusion about which process the upcoming Apple products will use. Again, the iPhone will have Samsung 14nm-based ULP SoCs and the iPad will have TSMC 16FF+-based SoCs, which represents about a 70/30 wafer split in favor of the iPhone, of course.

    Speaking of SoCs, in the SemiWiki SoC Forum there is a thread about the latest Intel 14nm Cherry Trail SoC benchmarked against the 20nm Apple A8X, the 20nm Snapdragon, and the 14nm Exynos. Not a pretty picture, which begs the question: is it the design or the process? You gotta love open forums for a constant flow of raw information, crowdsourcing at its finest! Just pick a SemiWiki forum of interest and subscribe to it, simple as that.

    Another data point about Samsung being in production at 14nm is a mailer I just received. (Yes, I’m on just about every mailing list imaginable!):

    Collaborating with its top-tier design enablement and IP partners, Samsung has been
    working steadily to ensure that its first FinFET node offers industry-leading power/
    performance/area.

    With 14nm FinFET fully qualified, Samsung Foundry has begun production at our manufacturing facilities in Korea and the U.S. Customers looking to start a 14nm FinFET design also have access to Samsung Foundry’s regional design teams to ensure the chip design is optimized for first time silicon success.

    Check out the latest 14nm news from Samsung foundry business

    Samsung Foundry Website

    Samsung Foundry Blog

    Samsung 14nm FinFET Datasheet

    Follow Samsung Foundry

    Samsung Foundry
    Advanced process technologies, manufacturing expertise, and first-class services
    Learn why Samsung Foundry is a critical resource for competitive fabless and integrated device manufacturer semiconductor companies. Samsung Foundry offers deep expertise in advanced process and design technologies as well as an excellent track record in high-volume manufacturing. We offer a full range of foundry capabilities from design engagements to turnkey projects, with a focus on leading-edge process technologies from 90nm to 32/28nm on 300mm wafers and beyond.

    Benefit from Samsung’s optimized foundry solutions
    By outsourcing some or all of the design and manufacturing details to Samsung, you can be confident of maintaining the highest possible product quality while saving time and cost. Samsung Foundry provides a full range of solutions including advanced process technology, design services, design intellectual property (IP), and manufacturing facilities. Customer support is available at every step, from the initial engagement to volume manufacturing. And customer IP is stringently protected.


    ASML ASyMptotic progress - When will we get to EUV?


    ASML ASyMptotic progress - When will we get to EUV?
    by Robert Maire on 02-24-2015 at 5:30 pm

    • ASML making progress – but is it fast enough?
    • ASML has missed 10nm, can it catch 7nm? An economic question
    • Day one at SPIE- Better tone than last year but still cautious

    1000 simulated wafers versus 700 simulated
    At the opening of the SPIE conference, ASML announced that TSMC had reached 1,000 wafers a day “exposed” (not printed or produced).

    This is significant in two ways. First, though still just a simulation and not a test of real wafer production, it is a higher theoretical number than the test numbers “leaked” out by IBM over six months ago. Second, and perhaps more important, the test was run by a real contender in the semiconductor arms race: TSMC, who last year embarrassed ASML at SPIE by announcing that the tool had shot itself in the foot. This would seem to imply that TSMC is now more supportive, which is also evidenced by its continued purchases of tools.

    Is progress fast enough? Zeno’s paradox…

    Though progress is clearly being made, we remain concerned that the amount of money and effort being put into EUV is producing fewer and smaller gains as we try to get closer to a “production” system. Much like Zeno’s paradox or an asymptotic curve, incremental progress appears to slow as we get closer to the goal.

    The announcement of 90 watts of power is certainly better than the 75 watts previously discussed, but it would have been a lot better to be talking about a doubling to 150 watts, especially as we live in a binary world of Moore’s law.

    Catching a moving Moore’s law train … That already left the 10nm station
    The reason for our concern about progress rates is that the industry and Moore’s law are not waiting around for EUV to catch up. From discussions with a number of people at the show, it’s clear that 10nm is long gone (as has been known by those in the industry), but the new question is how much, if any, of 7nm ASML can catch. Whereas there was never much serious talk of ASML making 10nm (except from ASML), there is a lot of speculation about a 7nm intercept.

    Economics enters the picture
    Everything always comes down to the final arbiter: money. This year at SPIE there is clearly more talk about the cost of EUV versus multi-patterning. There was a good presentation on the cost of high-NA (high numerical aperture) EUV versus multi-patterning.

    We have suggested in the past that there should be an economic crossover point from multi-patterning where the EUV production decision becomes clear, but it sounds as if that line is blurring a bit. Part of the reason is that the delay in EUV has caused other complications that may confuse the simple economic choice for EUV. One example is the need for multi-patterning in EUV anyway by the time it gets to HVM, thereby taking away one of its positive attributes. However, it’s still hard to see how multi-patterning can win in the long run when we hear talk of quad and “oct” patterning as if they were viable alternatives forever.
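    The crossover argument can be made concrete with a toy cost model. Every number below is an invented placeholder (real exposure and process-step costs are closely guarded), so only the shape of the comparison matters:

```python
# Hedged toy cost model -- all numbers are invented placeholders, not
# real fab economics. Cost per patterned layer: each extra immersion
# exposure also drags in extra etch/deposition steps, while EUV is one
# expensive exposure (or two, once EUV itself needs double patterning).
def multi_patterning_cost(n_exposures, immersion_exposure=80.0,
                          extra_steps_per_split=60.0):
    """Per-layer cost of n_exposures-way immersion multi-patterning."""
    return (n_exposures * immersion_exposure
            + (n_exposures - 1) * extra_steps_per_split)

def euv_cost(n_exposures=1, euv_exposure=260.0, extra_steps_per_split=60.0):
    """Per-layer EUV cost; n_exposures > 1 models EUV double patterning."""
    return (n_exposures * euv_exposure
            + (n_exposures - 1) * extra_steps_per_split)

# Under these placeholder numbers, single-exposure EUV loses to double
# patterning (220) but beats triple (360) and quadruple (500) -- while
# EUV double patterning (580) loses even to quadruple, blurring the
# clean crossover exactly as discussed above.
for n in (2, 3, 4):
    print(n, multi_patterning_cost(n), euv_cost())
```

    The design choice to model the extra etch/dep steps separately reflects why Lam and AMAT benefit from multi-patterning: each added split buys them process steps regardless of which scanner wins.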

    It would be wrong to underestimate the semiconductor industry’s aversion to change, and the industry has gotten very comfortable with multi-patterning.

    No Breakthroughs or new news…

    There does not appear to be any new news or breakthrough moment so far at SPIE, with ASML’s announcement being a ho-hum confirmation of the slow pace rather than a positive surprise.

    Alternatives not ready
    DSA (directed self-assembly), NIL (nanoimprint lithography), and direct-write e-beam lithography are still works in progress, further behind than EUV but also not showered in money as EUV has been.

    Canon appears to be furthest along, using NIL at Toshiba for NAND production, and we wouldn’t be surprised to see it in limited use at some point. Talk of alternative technologies at the show has quieted as discontent with EUV has abated a bit.

    Infrastructure not ready
    The “ecosystem” for EUV is further behind than EUV itself and will clearly limit the introduction of EUV whenever it really becomes available. This is not new news, as we have been talking about it for a long time; the problem has not changed, nor have there been any changes among the significant participants, such as KLAC. This remains a major bottleneck on EUV’s progress to HVM.

    No stock impact
    As there isn’t anything incrementally positive to find at SPIE so far, we see no reason for any significant change in stock valuation. EUV remains a work in progress without a clear insertion point and alternatives have their issues as well.

    We do continue to believe that both Lam and AMAT will have a long positive run in etch and dep to support multi-patterning which will clearly be around for quite a while.

    Robert Maire
    Semiconductor Advisors LLC


    High Level Synthesis Gets Stronger

    High Level Synthesis Gets Stronger
    by Daniel Payne on 02-24-2015 at 1:00 pm

    High Level Synthesis (HLS) tools have been around for at least two decades now, and you may recall that about one year ago Cadence acquired Forte. The whole promise of HLS is to provide more design and verification productivity by raising the design abstraction from RTL code up to SystemC, C or C++ code. With any acquisition it is natural to ask a few questions, like:

    • Which EDA tool will live, and which will die?
    • What is the new product roadmap?
    • What happens to all of my legacy design work, will it be supported?

    I spoke with David Pursley of Cadence on Monday to get an update on what they’ve been doing for the past 12 months in HLS. The really good news is that they’ve combined the best features of the Forte Cynthesizer tool with the Cadence C-to-Silicon Compiler tool, and named it Stratus HLS.

    Related – Cadence Acquires Forte

    This means that any customer already using Cynthesizer or C-to-Silicon Compiler can continue using their favorite HLS tool, or upgrade to the Stratus HLS tool to get the best of both tools. The Stratus HLS learning curve for existing users will be quite brief. The overall design flow stays the same where you can perform functional simulation with Incisive, formal analysis with JasperGold, HLS with Stratus, and logic synthesis with Encounter RTL Compiler:

    Zooming in a bit on the HLS flow, there are the familiar input languages (SystemC, C, C++) and RTL output:

    Users of Stratus HLS manage the big picture items:

    • Function
    • Architecture
    • Constraints

    Automation from Stratus then boosts productivity by managing:

    • Schedule of operations
    • FSM encoding
    • Area reduction
    • Timing
    • ECO
    • Clock gating
    • Pipeline registers
    • Consistent RTL style
    • Sharing datapath components

    You can start to think about using this type of HLS on your next SoC design, including both control and datapath logic, instead of constraining HLS to only DSP blocks. The interface IP and floating-point IP give you a re-use head start with synthesizable, optimized SystemC building blocks.
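    Of the automated items listed above, the schedule of operations is the heart of HLS: mapping untimed operations onto clock cycles under resource constraints. A minimal, generic ASAP-style sketch (not Stratus’s actual algorithm; the one-multiplier limit and op names are invented for illustration):

```python
# Minimal ASAP-style operation scheduler -- a generic sketch of what an
# HLS "schedule of operations" step does, NOT Stratus's algorithm. Each
# op is scheduled as soon as all its predecessors finished in an earlier
# cycle, subject to a limit on how many multipliers exist per cycle.
def asap_schedule(ops, deps, mult_limit=1):
    """ops: {name: kind}; deps: {name: [predecessor names]} (acyclic).
    Returns {name: cycle} for every operation."""
    cycle_of = {}
    cycle = 0
    while len(cycle_of) < len(ops):
        mults_used = 0
        for name, kind in ops.items():
            if name in cycle_of:
                continue
            ready = all(p in cycle_of and cycle_of[p] < cycle
                        for p in deps.get(name, ()))
            if not ready:
                continue
            if kind == "mul":
                if mults_used >= mult_limit:
                    continue  # resource conflict: defer to a later cycle
                mults_used += 1
            cycle_of[name] = cycle
        cycle += 1
    return cycle_of

# y = a*b + c*d with only one multiplier: the two multiplies serialize.
print(asap_schedule({"m1": "mul", "m2": "mul", "add": "add"},
                    {"add": ["m1", "m2"]}))
# -> {'m1': 0, 'm2': 1, 'add': 2}
```

    With `mult_limit=2` the same graph finishes a cycle earlier, which is exactly the kind of area-versus-latency trade an HLS tool explores automatically from constraints the user sets.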


    Graphical analysis with links to source code

    Blu Wireless Technology, an early user of Stratus HLS, designed a multi-gigabit modem and has an early working prototype, thanks to the automation provided, even while the specifications were changing.

    Related – White Paper about Blu Wireless

    HLS has just become stronger as Cadence offers up the Stratus HLS tool as a combination of the Forte Cynthesizer and Cadence C-to-Silicon Compiler tools. The HLS market continues to grow because users can measure their productivity improvements, QoR, and benefits from high-level IP re-use.

    Related – SystemC HLS Methodology


    eSilicon Just Taped-out a SonicsGN-based SoC. And It’s Not a Secret

    eSilicon Just Taped-out a SonicsGN-based SoC. And It’s Not a Secret
    by Paul McLellan on 02-24-2015 at 7:00 am

    I slipped into the shadows at the back of the bar in the Tenderloin. Mid-afternoon on a weekday, almost nobody in there.

    “So you’re with the NSA?” I asked.

    “I can’t confirm that,” the man said.

    “The Network Stealing Agency.”

    “That’s not what it stands for,” he said indignantly. “It’s the National Security Agency. We ensure…well, that nothing bad happens.”

    “And what have you stopped?”

    “I’m not cleared to that level. But I’m assured something, so I’m sure it would have been bad. Anyway, I hear you have found out something about a network. We are always interested in networks.”

    “Indeed. Apparently Sonics have been used as the NoC on a big SoC that eSilicon have been doing for a customer. They just taped-out.”

    “Knock?”

    “No, not knock. NoC. Network-on-Chip. They used SonicsGN. It is their most advanced NoC.”

    “How did you find this out? Did Edward Snowden leak it to you?”

    “They put out a press release this morning.”

    “Cunning. Hiding their secrets in plain sight. Like that Edgar Allan Poe story.”

    “I don’t think they want it to be secret. You mean you didn’t know this already? You could’ve just read the news-wire.”

    “That’s not our style. We like to be more indirect. We break into the company that makes the SIM cards for mobiles and steal the encryption keys for all the phones. Then we listen to all the calls. Then we run them through speech-to-text. Analyze for keywords. Run them through our million-server cloud farm datacenters. We know if anything important is going to happen pretty quickly. Maybe just a week later. I bet these Sonics and eSilicon people have been talking.”

    “I’m sure they have. And the customer. There has to be a customer. eSilicon doesn’t make chips for themselves. They are a fabless ASIC company.”

    “So it is a secret who the customer is. Secrets are our business. I bet we could find out who it is.”

    “You will have the answer in a few months?”

    “Maybe quicker. So why did these guys pick Sonics? It seems like it might be a big deal.”


    “It was very high performance, 500GB/sec. They needed lots of flexibility. The schedule was aggressive so they wanted confidence that place and route would be straightforward and that timing closure would be fast.”

    “Is it a big chip?”

    “Yes, but much smaller than it could have been. With SonicsGN they didn’t waste all the area that hand-created interconnect based on buses would need.”

    “So this network-on-chip thing. It’s all state-of-the-art multimode fiberoptic?”

    “No, chips don’t work like that. It’s all copper.”

    “Copper! Like in the olden days. Very retro. Are these guys all hipsters?”

    “Right. All self-respecting IC designers wear scarves and hats, and ride fixies.”

    “Really,” he said. “I didn’t know that. Do they all drink PBR?”

    I sighed. “And some process technologies don’t just have copper. They even have air gaps to increase performance.”

    “Air gaps. That’s something I know about. When an organization doesn’t have external connections to the Internet. We need to use cunning to get across an air gap, like with compromised thumb drives. Or turning their cell-phone microphones on and listening to their typing.”

    “Is that really a thing?”

    “I couldn’t possibly say. So how do I find out more about these NoCs?”

    “There is an introductory webinar you can watch. NoC 101. The chief technology officer of Sonics presents it.”

    “An officer? Like a 4-star general?”

    “Not exactly. Are you interested in power?”

    “Of course. We are the government. Oops, slip of the tongue. I mean we ‘work for’ the government. But yes, we are interested in power. The more the better.”

    “That’s not how chips work. We like less power.”

    “Less power. Who ever got anywhere with less power? Don’t you guys read Machiavelli?”

    “Otherwise the chips get too hot. There is another webinar about that. NoC 102. That officer guy presents that one too. You can learn things like how the NoC can automatically power up and down blocks without the control processor being powered-up.”

    “So how do I find these secret webinar thingies?”

    “They are not secret. They are on the Sonics website. Just go to sonicsinc.com/resources/webinars.”

    “Wow. So simple. I’ll get some supercomputers downloading and analyzing them immediately.”

    “You could be watching in seconds on your phone.”

    “They don’t let us have our own phones. Too big a security risk. We may not be the only people to steal all the SIM card keys.”

    “I need to go,” I said. “I have a piece to write.”

    “So where do you publish?”

    “SemiWiki. We follow the industry so you don’t have to.”

    The Sonics press-release is here.