
MemCon Returns
by Paul McLellan on 07-25-2012 at 9:44 am

Back before Denali was acquired by Cadence, it used to run an annual conference called MemCon. Since Denali was the Switzerland of EDA, friend of everyone and enemy of none, there would be presentations from other memory IP companies and from major EDA companies. For example, in 2010 Bruggeman, then CMO of Cadence, gave the opening keynote, but there were also presentations from Synopsys, Rambus, Micron, MoSys, Samsung and lots of others. Plus, of course, several presentations from Denali. The format and timing varied: sometimes it was two days in June, sometimes one day in July.

When Cadence acquired Denali, it folded MemCon into CDNLive, Cadence’s series of user group meetings, the largest of which was in San Jose, so there was no standalone MemCon in 2011.

But for 2012, MemCon is back again: Tuesday September 18th at the Santa Clara Convention Center. The full agenda is not yet available (EDIT: yes it is, it is here), but current sponsors and speakers include:

  • Agilent
  • Cadence
  • Discobolus Designs
  • Everspin
  • Kilopass
  • Micron
  • Objective Analysis
  • Samsung

MemCon is free to attend but the number of places is limited and so you must pre-register and can’t just show up at the last minute. The registration page for MemCon is here.


The Future of Lithography and the End of Moore’s Law!
by Paul McLellan on 07-24-2012 at 10:35 pm

This blog, with a chart showing that the cost of a given amount of functionality on a chip is no longer going to fall, is, I think, one of the most-read I’ve ever written on SemiWiki. It is actually derived from data nVidia presented about TSMC, so at some level perhaps it is two alpha males circling each other preparing for a fight. Or, in this case, wafer price negotiations.

However, I also attended the litho morning at Semicon West last week, along with a lot more people than there were chairs (I was smart enough to get there early). I learned a huge amount. But nobody really disputed the fact that double and triple patterning make wafers a lot more expensive than when we only needed single patterning (28nm and above). The alternatives are three new technologies (or perhaps a combination): extreme ultraviolet lithography, direct-write e-beam and directed self-assembly. EUV, DWEB or DSA. Pick your acronym.

I wrote about all of them in more detail. Here are links. Collect the whole set.

I talked to Gary Smith, who had been at an ITRS meeting. He says ITRS are not worried and that they think 450mm (18″) wafers will solve all the cost issues. I haven’t seen any information about the expected cost difference between 300mm and 450mm wafers, but for sure it is not negligible. Twice as many die for how much more per wafer? A lot of 450mm equipment needs to be created and purchased too, although presumably in most cases you only need half as much of it for the same number of die.
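For rough scale (my arithmetic, not a number from the ITRS), the wafer area ratio is simply:

    (450 mm / 300 mm)^2 = 1.5^2 = 2.25

which is where “twice as many die” comes from; the exact die count gain also depends on die size and edge effects. So the economics hinge on whether a processed 450mm wafer ends up costing much less than 2.25 times a 300mm one.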

Gary also had lunch with the ITRS litho people and they still see EUV as the future. After listening to these presentations I’m not so sure. The issue everyone is focused on (see what I did there?) is the lack of a powerful enough light source for EUV. But the mask blank defect issue and the lack of a pellicle also look like killer problems.

Maybe I shouldn’t worry. After all, all these people know way more than I do about lithography. I’m a programmer by background, after all.

But I was at the Common Platform Technology Forum in March and Lars Liebmann of IBM said: “I worked on X-ray lithography for years and EUV is not as far along as X-ray lithography was when we finally discovered it wasn’t going to work.”

Now that’s scary.

The reason this is so important is that Moore’s law is not really about the number of transistors on a chip, it is about the cost of electronic functionality dropping. If that chart turns out to be true, it means a million gates will never get any cheaper. That means an iPad is as cheap as it will ever be. An iPhone is never going to cost just $20. We will never have cell-phones that are so cheap that, like calculators, we can give them away. Yes, we may have sexy new electronic devices. But the cost of a product at introduction is the cost that it will always be. It won’t come down in price if we just wait, as we’ve become used to. Electronics will become like detergent, the same price year after year.


The Total ARM Platform!
by Daniel Nenni on 07-24-2012 at 7:30 pm

In the embedded world that drives much of today’s ASIC innovation, there is no bigger name than ARM. Not to enter the ARM vs. Intel fray, but it’s no exaggeration to say that ARM’s impact on SoCs is as great as Intel’s on the PC. Few cutting edge SoCs are coming to market that do not include some sort of embedded processor. And a disproportionately large number of those processors are from ARM.

It stands to reason, then, that the capability to design and implement ARM cores in ASIC SoCs is paramount, so I used some of my time at DAC to check out what was happening in the ARM design and implementation space. To that end, I was very pleased with what I found at the GUC demonstration.

For those few of you who may not know, GUC, or Global Unichip Corp., is leading the charge into a new space called the Flexible ASIC Model™. The Flexible ASIC Model, according to GUC, allows semiconductor designers to focus on their core competency while providing a flexible handoff point for each company, depending on where their core competency begins and ends. The model accesses foundry design environments to reduce design cycle time, provides IP, platforms and design methodologies to lower entry barriers, and integrates technology availability (design, foundry, assembly, test) for faster time-to-market. In a nutshell, the company should have a great deal of insight into how to integrate ARM-based processors into ASIC innovation.

Not surprisingly, GUC has dedicated significant resources to successfully embedding ARM processors into ASIC designs. The service covers a robust and proven hardening flow that targets leading edge manufacturing process technologies, ARM-specific IP and design, successful test chips for ARM926, ARM1176 and the Cortex series, software support and a development platform that includes fast system prototyping.

The ARM hardening process starts with RTL validation that includes specification confirmation and memory integration. The next step, synthesis, covers critical path optimization and timing constraint polishing. Design for test (DFT) includes MBIST integration, scan insertion and compression, at-speed DFT feature integration and test coverage tuning. Place and route services cover floor planning and placement refinement, timing closure, dynamic and leakage power analysis, IR/EM analysis, dynamic IR analysis, design for manufacturing (DFM), and DRC/LVS. Final quality assurance (QA) covers library consistency review, log parsing, report review and checklist item review.

GUC’s ARM core hardening service also includes documentation deliverables such as application notes and simulation reports. Normally, after receiving a customer’s specification requests, GUC provides a preliminary timing model within one month and completes the design kit within two months.

Most importantly, their methodology has been proven. GUC recently broke the one-gigahertz barrier with an ARM Cortex-A9 processor, and their history with ARM stretches back over a decade. During that time, the company has successfully run more than 90 ARM core tape-outs with proven production at high yields for different applications (high performance, low power) on multiple TSMC process nodes (28nm, 40nm, 65nm, 90nm).

The role that ARM cores will play as ASIC SoC innovation moves forward is still largely untold. But what is clear is that an ARM hardening process will be required for ASIC success. Given the complexity, this may be a difficult capability to bring in-house, so finding ASIC companies with critical, proven ARM hardening capabilities will become an increasingly important ingredient in the success formula.



Media Tablet Strategy from Google and Microsoft: illusion about the effective protection of NDAs…
by Eric Esteve on 07-24-2012 at 10:26 am

Extracted from an interesting article by Jeff Orr of ABI Research: “We have all heard about leaked company roadmaps that detail a vendor’s product or service plans for the next year or two. Typically, putting one’s plans down in advance of public announcement has two intended audiences: customers who rely on roadmaps to demonstrate that a vendor has “staying power,” and supplier partners who communicate commitment to the supply chain and relay future requirements. These plans generally are well guarded secrets and those getting to view them often have to sign a non-disclosure agreement saying they will not share the information with other parties. But what happens when suppliers decide to enter the market and compete head-on with those vendors that have confided their companies’ futures?”

This article targets the strategies of Microsoft and Google, which basically amount to competing with their (probably former?) partners, the media tablet manufacturers. In the conclusion, the author doesn’t look overly optimistic about the return from such a strategy: if it is successful, and the OS vendor finally reaches a level of tablet sales comparable to Apple’s, and by the way the same level of profit, as that should be the ultimate goal (!), then “the OS vendors will find it increasingly difficult to rebuild the former trust with the device ecosystem”. And if the OS vendor fails, “they will have caused irreparable harm to the mobile device markets”. Looks like a superb lose-lose strategy, doesn’t it?

The important information that can also be found in this article, even if it is relatively hidden, is this: the protection given by an NDA is an illusion!

In other words, signing an NDA with a partner (who could become a competitor) is absolutely useless. You may protect a technology or some product features with patents, but an NDA has never prevented a so-called partner from stealing your idea, or from duplicating your product roadmap! I remember one of my customers, at the end of the 1990s, developing an ASSP with our ASIC technology. The device was an ARM-based modem providing Ethernet connectivity for printers and the like; it was a good business for both of us, generating large volumes, when this customer decided to challenge our prices and submit an RFQ to one of our competitors (based in Korea, with an “S” at the beginning of the name). This customer finally stayed with us, but less than one year later Sxxx was launching a directly competing product! I am sure that our customer had signed an NDA with this ASIC supplier…

I firmly believe that NDAs are USELESS (except if you love to waste time on additional legal work). I also firmly believe that I will continue to sign NDAs with customers or partners for a long time, just because that’s the way it works!

Eric Esteve from IPnest


How Many Licenses Do You Buy?
by Paul McLellan on 07-23-2012 at 6:16 pm

An informal survey of RTDA customers reveals that larger companies tend to buy licenses based on peak usage, while smaller companies do not have that luxury: they have to settle for fewer licenses than they would ideally have and optimize the mix of licenses they can afford within their budget. Larger companies get better prices (higher volume), but the reality is that often they still have fewer than they would like. Or at least fewer than the engineers who have to use the licenses would like.

Licenses are expensive, so obviously having too many of them is costly. But having too few is costly in its own way: schedules slip, expensive engineers sit waiting for cheap machines, and so on. Of course licenses are not fungible, in the sense that you cannot use a simulation license for synthesis, so the tradeoff is further complicated by getting the mix of licenses right for a given budget.

There are two normal ways to change the mix of licenses for a typical EDA multi-year contract. One is at contract renewal to change the mixture from the previous contract. But large EDA companies often also have some sort of remix rights, which lets the customer exchange lightly (or never) used licenses for ones which are in short supply. There are even more flexible schemes for handling some peak usage, such as Cadence’s EDAcard.

Another key piece of technology is a fast scheduler that can move jobs from submission to execution quickly. When one job finishes and gives up its license, that license is not generating value until it is picked up by the next job. Jobs are not independent (the input to one job is often the output of another). The complexity of this can be staggering: one RTDA user has a task that requires close to a million jobs and involves 8 million files. As we need to analyze at more and more process corners, these numbers are only going to go up. Efficient scheduling can make a huge difference to license efficiency and to the time the entire task takes.
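To make the idea concrete, here is a minimal sketch in Python (a toy illustration with made-up job names and license counts, not how NetworkComputer actually works): a license-aware scheduler only dispatches a job once its inputs are done and a license of the right feature is free, and a freed license is handed to the next ready job on the very next pass.

    from collections import deque

    # Hypothetical jobs: name -> (license feature needed, jobs it depends on)
    jobs = {
        "synth_A": ("synthesis",  set()),
        "sim_A":   ("simulation", {"synth_A"}),
        "sim_B":   ("simulation", {"synth_A"}),
    }

    # Hypothetical license pool per feature
    licenses = {"synthesis": 1, "simulation": 1}

    done, running = set(), {}
    pending = deque(jobs)

    def ready(name):
        feature, deps = jobs[name]
        return deps <= done and licenses[feature] > 0

    # Very simplified event loop: dispatch whatever is ready, then retire one job
    while pending or running:
        for name in list(pending):
            if ready(name):
                feature, _ = jobs[name]
                licenses[feature] -= 1   # license is busy from the moment of dispatch
                running[name] = feature
                pending.remove(name)
        if not running:
            break                        # nothing dispatchable: a deadlock in real life
        name, feature = next(iter(running.items()))
        del running[name]                # pretend this job just finished
        licenses[feature] += 1           # its license is immediately reusable
        done.add(name)
        print("finished", name)

A real scheduler does this across thousands of jobs and many license features at once, which is where the speed of the scheduling loop starts to matter.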

There are a number of different ways of investigating whether the number of licenses is adequate and deciding what changes to make.

  • Oil the squeaky wheel: some engineers complain loudly enough and the only way to shut them up is to get more licenses.
  • Peak demand planning: plan for peak needs. This is clearly costly and results in relatively low average license use for most licenses.
  • Denials: look at logs from the license server. But with a good scheduler like NetworkComputer there may be no denials at all, and that is still not a sign that the licenses are adequate.
  • Average utilization: this works well for heavily used tools/features, whereas for a feature that is only used occasionally the average isn’t very revealing and can mask the fact that more licenses would actually increase throughput.
  • Vendor queueing: this is when, instead of getting a license denial, the request is queued by the license server. A good scheduler can take advantage of this by starting a limited number of jobs in the knowledge that a license is not yet available but should soon be. It is a way of “pushing the envelope” on license usage since the queued job has already done some preliminary work and is ready to go the moment a license is available. But as with license denials, vendor queuing may not be very revealing since the job scheduler will ensure that there is only a small amount of it.
  • Unmet demand analysis: this relies on information kept by the job scheduler. It is important to take care to distinguish jobs that are delayed waiting for a license from jobs that are delayed waiting for some other resource, such as a server. Elevated levels of unmet demand are a strong indicator of the need for more licenses.


A more formal approach can be taken with plots showing various aspects of license use. The precise process will depend on the company, but the types of reports that are required are:

  • Feature efficiency report: the number of licenses required to fulfill requests for a specific feature 95%, 99% and 99.9% of the time (see the sketch below)
  • Feature efficiency histograms: showing the percentage of time each feature is in use over a time frame, and the percentage of time that individual licenses are actually used
  • Feature plots: plots over a period of time showing capacity, peak usage and average usage, along with information on license requests, grants and denials
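As a rough sketch of how that first report can be computed (assuming a hypothetical checkout/checkin log, not RTDA’s actual format or report code), you can replay the log into a concurrent-usage curve and ask what license count covers 95%, 99% and 99.9% of the observed time:

    # Hypothetical usage log for one feature: (timestamp, +1 checkout, -1 checkin)
    events = [(0, +1), (5, +1), (7, -1), (9, +1), (12, -1), (20, -1)]

    # Turn the log into (concurrent licenses in use, duration at that level)
    events.sort()
    levels, level = [], 0
    for i, (t, delta) in enumerate(events):
        level += delta
        nxt = events[i + 1][0] if i + 1 < len(events) else t
        levels.append((level, nxt - t))

    def licenses_needed(levels, fraction):
        # Smallest license count that covers `fraction` of the observed time
        total = sum(d for _, d in levels)
        covered = 0
        for count, dur in sorted(levels):   # lowest concurrency first
            covered += dur
            if covered >= fraction * total:
                return count
        return max(c for c, _ in levels)

    for f in (0.95, 0.99, 0.999):
        print(f"{f:.1%} of the time:", licenses_needed(levels, f), "licenses")

Note that a checkout log only shows requests that were granted; queued or denied demand has to come from the job scheduler’s own records, which is exactly the unmet-demand point above.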


These reports can be used to make analyzing the tradeoffs involved in licenses more scientific. Of course it is never going to be an exact science: the jobs running today are probably not exactly the same ones that will run tomorrow. And, over time, the company will evolve: new projects get initiated, new server farms come online, changes are made to the methodology.

No matter how many licenses are decided upon, there will be critical times when licenses are not available and projects are blocked. The job scheduling needs to be able to handle these priorities to get licenses to the most critical projects. At the same time, the software tracking usage must be able to provide information on which projects are using the critical licenses to allow engineering management to make decisions. There is little point, for example, in redeploying engineers onto a troubled project without also redeploying licenses for the software that they will need.


Libraries Make a Power Difference in SoC Design
by Daniel Payne on 07-23-2012 at 4:37 pm


At Intel we used to hand-craft every single transistor size to eke out the ultimate in IC performance for DRAM and graphics chips. Today, there are many libraries that you can choose from for an SoC design in order to reach your power, speed and area trade-offs. I’m going to attend a Synopsys webinar on August 2nd to learn more about this topic and then blog about it.

I met the webinar presenter Ken Brock back in the 80’s at Silicon Compilers, the best-run EDA company that I’ve had the pleasure to work at.

Webinar Overview: Mobile communications, multimedia and consumer SoCs must achieve the highest performance while consuming the minimal amount of energy to achieve longer battery life and fit into lower-cost packaging. Logic libraries with a wide variety of voltage thresholds (VTs) and gate channel lengths provide an efficient method for managing energy consumption. Synopsys’ multi-channel logic libraries and Power Optimization Kits take advantage of low-power EDA tool flows and enable SoC designers to achieve timing closure within the constraints of an aggressive power budget.

This webinar will focus on:

  • How combining innovative power management techniques using multiple VTs/channel lengths in different SoC logic blocks delivers the optimal tradeoff in SoC watts per gigahertz
  • Ways to maximize system performance and minimize cost while slashing power budgets of SoC blocks operating at different clock speeds

Length: 50 minutes + 10 minutes of Q&A

Who should attend: SoC design engineers, system architects, project managers



Ken Brock, Product Marketing Manager for Logic Libraries, Synopsys

Ken Brock is Product Marketing Manager for Logic Libraries at Synopsys and brings 25 years of experience in the field. Prior to Synopsys, Ken held marketing positions at Virage Logic, Simucad, Virtual Silicon, Compass Design Systems and Mentor Graphics. Ken holds a Bachelor’s Degree in Electrical Engineering and an MBA from Fairleigh Dickinson University.


EUV: No Pellicle
by Paul McLellan on 07-22-2012 at 10:00 pm

There’s a dirty secret problem with EUV that people don’t seem to be talking about: there’s no pellicle on an EUV mask. OK, you probably have no idea what that means (a lot of jargon words) or why it would be important, but it seems to me it could be the killer problem for EUV.

With conventional transmissive masks, you print a pattern on a plate of quartz. Then you cover it with another layer known as the pellicle (I speak French and that is the word for camera film from the pre-digital era, but it seems to mean a thin skin of any kind). It’s basically a cover so nothing can ever get to the mask that wasn’t already there; any contamination that lands on the reticle is actually on the pellicle. During exposure in the stepper, 193nm light is transmitted through the mask onto the photoresist on the wafer. But the optics are focused on the mask itself, so contamination on the pellicle (unless totally gross, obviously) is out of focus and doesn’t affect anything. Like a speck of dust on the lens of your iPhone, it just won’t show up. In lithographic terms, it doesn’t print.

EUV masks are reflective and can’t have a pellicle, because the light would not go through it (EUV is absorbed by almost everything, which is why EUV systems have to be in a vacuum, and why the mirrors are so complicated; a regular mirror would absorb everything). But that means that any contamination on the mask is in the focal plane that we care about and will make it onto the wafer. So the standards for cleanliness inside an EUV stepper are insane. After all, a single particle on a mask will be stepped across hundreds of wafers. If it is on an early layer, then dozens of process steps will take place and eventually…zero yield. And by the time this is realized it is weeks later. In practice, no particles can be present at the time of exposure. That’s a high bar. Particles on the pellicle don’t really matter, which is what we have got used to. EUV is different.

So keeping the mask clean is a major issue. There is lots of work going on on how to keep EUV masks clean, but the standard is essentially perfection. EUV mask features are 4X the size of those on the wafer, so at 20nm the problem is particles around 50-80nm (big enough to cut a line or break a contact). That is still tiny. Further, masks cannot be cleaned all the time; they are in the stepper most of the time. And there is no way, today at least, to detect, while they are in use, that a problem has arisen.

This is a huge change. At previous process nodes contamination could occur on the wafer (knocking out a die) or on the pellicle (having no effect). Now contamination on the mask affects not just a single die but all die until it is cleaned.


Re-defining Semiconductor Collaboration!
by Daniel Nenni on 07-22-2012 at 7:00 pm

GlobalFoundries wrote a nice response to my “How has 20nm Changed the Semiconductor Ecosystem?” and redefined the word collaboration. Our industry is plagued with sound bites and acronyms, so let us agree on a semiconductor ecosystem definition of collaboration.

Mojy Chian is senior vice president of design enablement at GLOBALFOUNDRIES. He is responsible for global design enablement, services, and solutions and is the primary technical customer interface for the company. I worked with Mojy for many years when he was at Conexant and Altera and have nothing but respect for him, especially when he buys me lunch. I finally got Mojy to blog, and his first one, “Re-Defining Collaboration”, is right on the money, literally.

The concept of collaboration – when two or more partners take on a shared objective to meet a mutually defined and beneficial goal – is no longer optional if you are in the semiconductor business. Time, cost and complexity have made the ‘go it alone’ approach obsolete. On the manufacturing side, only a handful of companies have the wherewithal to bring next generation capacity on line because of unprecedented cost and difficulty.

I’m okay with that definition. Are you? Is there anything else to add?

To address the chasm problem areas, an IDM-type interface is required between process teams at the foundry vendors and design teams at the fabless companies…A key issue is who establishes and pays for these IDM-type disciplines? Fabless company, foundry, or both? It is likely that many of the costs and disciplines will need to be shared.

I couldn’t agree more. I do believe however that the foundries are the driver and must lead the way on this. Meanwhile the fabless companies will continue to buy wafers based on price and delivery, right?

At the heart of collaboration today is a new type of relationship that borrows from the best of both the traditional IDM and foundry models. Our relationships with customers cannot be the ‘throw-it-over-the-wall’ approach that defined previous foundry models. We must be in lock step with customers’ internal strategies to the point that we both have skin in the game. Shared investment and success are hallmarks of collaboration in the modern foundry model.

Absolutely. Samsung calls it Simulated IDM. TSMC calls it the Open Innovation Platform (OIP). It will be interesting to see what label Mojy comes up with here.

In future posts in this space I will explore in more depth, from my vantage point of overseeing GLOBALFOUNDRIES’ design enablement efforts, key issues and approaches to enhancing efficiencies through collaboration throughout the ecosystem. These include how EDA and IP relationships must change, addressing critical requirements such as power and routability, silicon-verified design flows, DFM strategies, innovative technologies such as double patterning and HKMG, as well as some fundamentally new concepts like design-enabled manufacturing (DEM), which changes the perspective of where and how critical information is used to optimize the chip development process.

Now we are getting to the meat of the semiconductor sandwich! Foundry EDA and IP relationships MUST CHANGE! I will blog more on this later but I would like to hear from the crowd what really needs to be done here. Without design enablement we would not have design so let’s prioritize this to the highest level!

HOW DO THE EDA AND IP FOUNDRY RELATIONSHIPS NEED TO CHANGE?


Directed Self Assembly
by Paul McLellan on 07-19-2012 at 9:00 pm

At Semicon, Ben Rathsack of Tokyo Electron America talked about directed self-assembly (DSA) at the standing-room-only lithography morning. So what is it? Self-assembly involves taking two monomers that don’t mix and letting them polymerise (like styrene forming polystyrene). Since they won’t mix, they will form up into separate areas for the two polymers. If you do nothing else you will end up with somewhat random patterns, like a fingerprint. But if you provide guides, either physical guides by putting material on the wafer at a coarse density, or chemical guides by putting down a thin coat of something that attracts one of the polymers, then instead of a random pattern you get a sort of amplification of the pattern you laid down, but at a much finer grain.

For example, if you put down material at 80nm spacing you can end up with the two polymers lining up and alternating at 28nm.


Holes (for contact/via cuts) are trickier. It is easy enough to get a honeycomb pattern with the two polymers, but a grid is harder. It turns out, though, that by adding a guide in the form of a larger trench and then putting the polymers into it, they self-assemble (if in the right ratio) into an outer polymer with a line of holes up the middle.

You can’t get a chip to completely self-assemble, of course. If it works, you can use DSA on interconnect layers to form lines and spaces, and on contact layers to create the contacts. You will still need cut masks to actually cut the interconnect. You will probably need multiple cut masks (assuming they are exposed with 193nm light and not EUV), but in principle you can make a very small cut essentially by over-exposure, provided you don’t want another cut in the same area on the same mask.

A further advantage of DSA is that it can even repair defects in the guide structures, as is shown in this picture from IMEC.

The big advantage of DSA, if it can be made to work, is that it uses existing equipment. In fact it doesn’t even need the latest generation. It has the potential to be much cheaper than EUV (never mind that EUV might not work out). It reminds me of the first time I heard about CDMA encoding for cell-phones. The idea is elegant but surely in practice, I thought, it will never work. But every phone in Korea and every phone on Verizon uses CDMA. So maybe, one day, every fab will be using DSA. IMEC has the first one already.


Electronics markets showing signs of recovery
by Bill Jewell on 07-19-2012 at 8:10 pm


Electronics markets bounced back strongly in 2010 from the 2008-2009 recession. The recovery stalled in 2011 as a series of natural and human-made disasters hit various parts of the world. Japan was hit by an earthquake and tsunami in March 2011. Thailand was affected by floods which disrupted HDD production and thus impacted PC production throughout the world. The European financial crisis led to economic weakness in most of the European Union (EU).

Recent government data on electronics production and orders shows most regions are beginning to recover from the 2011 slowdown. The chart below shows the three-month-average change versus a year ago, in local currency, for electronics orders (U.S. and EU) and production (China and Japan). The data is through May 2012, except for the EU, which is through April. China continues to show the most robust growth, with double-digit growth since the beginning of 2010. U.S. electronics orders experienced 12 months of year-to-year declines from March 2011 through January 2012, but turned positive in February 2012, reaching 6% growth in May. Japan electronics production change was negative for 16 months, with declines greater than 20% for six of those months. In May 2012, Japan electronics production turned positive at 2.3%. EU electronics orders remain weak, with April 2012 the twelfth consecutive month of year-to-year declines.

Unit shipments of PCs, mobile phones and LCD TVs have not yet reflected the above recovery. International Data Corporation (IDC) estimates PC unit shipments versus a year ago have been flat or in the low single-digit range for seven quarters through 2Q 2012. A bright spot has been media tablets, which have shown explosive growth since Apple released its iPad in 2Q 2010. Numerous competitors to the iPad have emerged from Samsung, Amazon and others. Media tablets in many cases are displacing PC sales. Combining unit shipments of PCs and media tablets results in a more realistic picture of this market segment. The combined data shows sturdy growth in the double digit range through 1Q 2012.


Mobile phone shipments were fairly sound with double-digit growth through the first three quarters of 2011. Shipments have weakened since, with 1Q 2012 down 1.5% from a year ago. Within the mobile phone market smart phones have shown vigorous growth, over 40% for the last three quarters. IDC estimated smart phones accounted for 36% of the total mobile phone market in 1Q 2012, double the percentage from two years earlier. Smart phones are a key driver for the semiconductor market due to the high semiconductor content. Smart phones also benefit mobile service providers through increased data usage and provide growth opportunities for app developers and accessory makers.

LCD TV unit shipments have also been weakening over the last two years, according to DisplaySearch. 1Q 2012 LCD TV units were down 3% from a year ago compared to year-to-year growth averaging over 30% in 2010 and 7% in 2011. 3D LCD TVs are emerging as a meaningful segment of the market, accounting for 14% of LCD TV shipments in 1Q 2012 versus just 4% in 1Q 2011.

Despite the overall weakness in electronics and semiconductor markets, some signs are pointing towards a resumption of healthy growth. Two of the key drivers of the new recovery, media tablets and smart phones, have just emerged as significant markets in the last few years. 3D TVs may drive growth in an overall flat TV market. Economists generally expect the overall world economy will show higher growth in 2013 than in 2012. An improving economy and the new electronics market drivers should lead to relatively strong electronics and semiconductor markets in 2013.