
Economic news not all bad for semiconductors
by Bill Jewell on 08-30-2011 at 2:06 pm



The economic news lately has been bleak. U.S. GDP grew at an anemic 0.4% in 1Q 2011 and 1.0% in 2Q 2011, leading to increased concerns about a double-dip recession. High government debt levels in the U.S. and several European nations have contributed to volatile stock markets. The news does not seem to be any better for the semiconductor industry. According to the Semiconductor Industry Association’s (SIA) reporting of World Semiconductor Trade Statistics (WSTS) data, the semiconductor market declined 2% in 2Q 2011 from 1Q 2011. The semiconductor market in 2Q 2011 was down 0.5% from a year ago after 8.2% year-to-year growth in 1Q 2011.

However, the news is not all bad. Looking at the components of U.S. GDP, spending on electronics by consumers and businesses is still relatively strong. Business investment in equipment and software (including computers, telecom and manufacturing equipment) grew 8.7% in 1Q and 7.9% in 2Q. Consumer spending on recreational goods and vehicles (over 75% of this category is electronics) grew 15.3% in 1Q 2011 and 9.3% in 2Q 2011.

Key end markets for semiconductors are continuing to show solid growth. Total mobile phones grew at high double-digit rates for the first two quarters of 2011, according to Gartner. Smartphones are driving mobile phone growth, with year-to-year growth of 85% in 1Q 2011 and 74% in 2Q. PCs declined 3.2% in 1Q 2011 versus a year ago but bounced back to 2.6% growth in 2Q, based on IDC data. Media tablets (dominated by Apple’s iPad) are growing explosively, with IHS iSuppli forecasting 245% growth in 2011. Media tablets are certainly displacing some PC sales, so the combination of the two gives a better picture of demand. Total PC plus iPad shipments were up 7.7% from a year ago in 1Q and up 9.5% in 2Q.

What is the outlook for the semiconductor market for the rest of 2011? See more at: http://www.semiconductorintelligence.com/


Apple’s $399 Plan to Win Consumer Market in Summer 2012
by Ed McKernan on 08-30-2011 at 10:30 am

The complete destruction of the consumer PC market in the US and Europe is well within Apple’s grasp and will begin to unfold next summer. There is nothing that Intel, Microsoft or the retail channels can do to hold back the tsunami that was first set in motion with the iPad last year and comes to completion with the introduction of one more mobile product and the full launch of the iCloud service for all. The dollars left on the table to defend against the onslaught are insufficient to put up a fight. Collapse is at hand.
Continue reading “Apple’s $399 Plan to Win Consumer Market in Summer 2012”


Nanometer Circuit Verification Forum
by Daniel Nenni on 08-29-2011 at 2:33 pm

Verifying circuits on advanced process nodes has always been difficult, and it’s no easier with today’s nanometer CMOS processes. There’s a great paradox in nanometer circuit design and verification. Designers achieve their greatest differentiation when they implement analog, mixed-signal, RF and custom digital circuitry on a single nanometer CMOS die, running at GHz frequencies. Yet it’s these very circuits that create huge design challenges, and introduce a whole new class of verification problems that traditional approaches can’t begin to adequately address.

Fortunately there’s a group of companies bringing to market innovative solutions that focus exactly on these problems, and collaborating to hold the nanometer Circuit Verification Forum (nmCVF) on September 22nd at TechMart in Santa Clara. Hosted by Berkeley Design Automation, and including technologists from selected EDA, industry and academic partners, this forum will showcase advanced nanometer circuit verification technologies and techniques. You’ll hear real circuit case studies where these solutions have been used to verify challenging nanometer circuits, including data converters, clock generation and recovery circuits (PLLs, DLLs), high-speed I/O, image sensors and RFCMOS ICs.

In addition to technical presentations and case studies, renowned EDA industry veteran and visionary Jim Hogan will give the keynote address.

Schedule
9:00 – Registration
9:30 – Welcome and Keynote
10:00 – Morning sessions (including break)
12:30 – Lunch
1:30 – Afternoon sessions (including break)
4:30 – Solution demonstrations and reception
6:30 – Forum wrap-up and close

Topic Areas
Application Examples
– Data converters
– PLLs and timing circuits
– High-Speed I/O
– Image sensors

Emerging Verification Technologies
– Nanometer device modeling
– Rapid prototyping including parasitic effects
– Thermal-aware circuit verification
– Variation-aware circuit design
– Circuit optimization and analysis

You should plan to attend if you’re a practicing circuit designer or a hands-on design manager looking for high-integrity, comprehensive circuit verification solutions focused on improving your circuit and getting it to market and to volume production faster.

Register HERE for the nanometer Circuit Verification Forum, or see nm-forum.com for more details. This event is FREE, so you know I will be there!



Semiconductor Yield @ 28nm HKMG!
by Daniel Nenni on 08-28-2011 at 4:00 pm

Whether you use a gate-first or gate-last High-k Metal Gate (HKMG) implementation, yield will be your #1 concern at 28nm, which makes variation analysis and verification a big challenge. One of the consulting projects I have been working on with the foundries and top fabless semiconductor companies involves High-Sigma Monte Carlo (HSMC) verification technology. It has certainly been a bumpy two years, but the results make for a good blog, so I expect this one will be well read.

GLOBALFOUNDRIES Selects Solido Variation Designer for High-Sigma Monte Carlo and PVT Design in its AMS Reference Flow

“We are pleased to work with Solido to include variation analysis and design methodology in our AMS Reference Flow,” said Richard Trihy, director of design enablement at GLOBALFOUNDRIES. “Solido Variation Designer together with GLOBALFOUNDRIES models makes it possible to perform high-sigma design for high-yield applications.”

Solido HSMC is a fast, accurate, scalable, and verifiable technology that can be used both to improve feedback within the design loop and to comprehensively verify yield-critical high-sigma designs.

Since billions of standard Monte Carlo (MC) simulations would be required for six-sigma verification, most yield-sensitive semiconductor designers use a small number of MC runs and extrapolate the results. Others manually construct analytical models relating process variation to performance and yield. Unfortunately, both approaches are time consuming and untrustworthy at 28nm HKMG.
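
To see why brute force is hopeless, here is a quick back-of-the-envelope sketch (my own illustration, not Solido’s method) of the sample counts involved:

```python
import math

# One-sided Gaussian tail probability at 6 sigma
sigma = 6.0
p_fail = 0.5 * math.erfc(sigma / math.sqrt(2))
print(f"failure probability at 6 sigma: {p_fail:.3e}")   # ~9.87e-10

# To observe even ~10 failures (a bare minimum for a usable estimate),
# a brute-force Monte Carlo run needs on the order of 10 / p_fail samples.
print(f"samples needed: {10 / p_fail:.2e}")              # ~1e10, i.e. billions
```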

Here are some of the results I have seen during recent evaluations and production use of Solido HSMC:

Speed:

  • 4,700,000x faster than Monte Carlo for 6-sigma analysis
  • 16,666,667x fewer simulations than Monte Carlo for 6-sigma analysis
  • Completed in approximately 1 day, well within production timelines

Accuracy:

  • Properly determined performance at 6-sigma, with an error probability of less than 1e-12
  • Used actual Monte Carlo samples to calculate results
  • Provided high-sigma corners to use for design debug

Scalability:

  • Scaled to 6-sigma (5 billion Monte Carlo samples)
  • Scaled to more than 50 process variables

Verifiability:

  • Error probability was reported by the tool
  • Results used actual Monte Carlo samples – not based on mathematical estimates


Mohamed Abu-Rahma of Qualcomm did a presentation at #48DAC last June in San Diego. A video of his presentation can be seen HERE. Mohamed used Solido HSMC and Synopsys HSPICE for six sigma memory design verification.

Other approaches to six-sigma simulation include:

  • Quasi Monte Carlo (QMC)
  • Direct Model-based
  • Worst-Case Distance (WCD)
  • Rejection Model-Based (Statistical Blockade)
  • Control Variate Model-Based (CV)
  • Markov Chain Monte Carlo (MCMC)
  • Importance Sampling (IS)

None of these were successful at 28nm, due to excessive simulation times and the inability to correlate with silicon. That is especially true of the Worst-Case Distance approach, which is currently being peddled by an EDA vendor whose name I will not mention. They claim it correlates to silicon, but it does not! Not even close! But I digress…
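
For readers unfamiliar with these techniques, here is a minimal, generic sketch of the last one, importance sampling, applied to a toy standard-normal tail. It shows the mechanics only and says nothing about how any of these methods behave on real 28nm circuits:

```python
import math
import random

random.seed(0)
THRESH = 6.0   # "failure" = toy performance metric beyond 6 sigma
SHIFT = 6.0    # bias the sampling distribution toward the failure region
N = 100_000

total = 0.0
for _ in range(N):
    x = random.gauss(SHIFT, 1.0)      # sample from N(SHIFT, 1), not N(0, 1)
    if x > THRESH:
        # Likelihood ratio N(0,1)/N(SHIFT,1) undoes the sampling bias
        total += math.exp(-SHIFT * x + SHIFT ** 2 / 2)

print(f"IS estimate: {total / N:.3e}")                               # ~1e-9
print(f"exact tail:  {0.5 * math.erfc(THRESH / math.sqrt(2)):.3e}")
```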

Having come from Virage Logic and worked with Solido for the last two years, I am basing this blog on my personal experience. If you have hard data that suggests otherwise, let me know and I will post it.

I would love to describe in detail how Solido solved this very difficult problem. Unfortunately I’m under multiple NDAs with the penalty of death and dismemberment (not necessarily in that order). You can download a Solido white paper on high-sigma Monte Carlo verification HERE. There is another Solido white paper that goes into greater detail on how they solved this problem, but it requires an NDA. You can also get a WebEx HSMC briefing by contacting Solido directly HERE. I observed one just last week and it was quite good; I highly recommend it!


Layout for analog/mixed-signal nanometer ICs
by Paul McLellan on 08-26-2011 at 5:24 pm

Analog has always been difficult: it is a bit of a black art to persuade a digital process to create well-behaved analog circuits, capacitors, resistors and all the rest. In the distant past, we would solve this by putting the analog on a separate chip, often in a non-leading-edge process. But modern SoCs integrate large amounts of digital logic along with RF, analog and mixed-signal functionality on a single die, and then manufacture it in the most bleeding-edge process. This is a huge challenge.

The complexity of design rules at 28nm and below has greatly complicated the process. Traditional custom layout is tedious, and inflexible once started (you only have to think of the layout impact of a trivial digital gate-level change to see this). As a result, layout teams don’t start layout until the circuit design is nearly complete, and so they have to work under tremendous tape-out pressure. However, a more agile approach is possible using automation, which reduces the effort needed, allows layout to be overlapped with circuit design and produces better (specifically smaller) layouts.

Timely design of the RF, analog and mixed-signal parts of many SoCs has become the long pole in the tent, the part of the schedule driving how fast the chip can be taped out. Part of the challenge is technical, since the higher variability of silicon at 28nm and below (and above too, but to a lesser extent) threatens to make many analog functions unmanufacturable. To cope with variability and control parasitics, foundries have introduced ever more complex design and DFM rules, and designers have come up with ever more elaborate circuits with more devices. Of course, both of these add to the work needed to complete designs.

The solution is an agile layout flow involving large-scale automation. The key advance is the capability to quickly lay out not just a single structure (e.g. a current mirror) in the presence of a few localized rules, but rather the entire design hierarchy in the presence of all the rules (complex design rules, area constraints, signal flow, matching, shielding etc).

Only by automating the whole layout process is it possible to move to an agile “throw-away” layout done before the circuit design is finalized, and thus to start layout earlier and do it concurrently with circuit design.

The advantages of this approach are:

  • significantly lower layout effort, since tasks are automated that were previously done by hand. This is especially the case in the face of very complex design rules where multiple iterations to fix design rule violations are avoided
  • large time-to-market improvement, since layout is started earlier and takes less total time, finishing soon after circuit design is complete
  • typically, at 28nm, a 10-20% die size reduction versus handcrafted layout

For more details of Ciranova’s agile layout automation, the white paper is here.


Will AMD and Samsung Battle Intel and Micron?
by Ed McKernan on 08-26-2011 at 2:00 pm

We received some good feedback on our article about Intel’s Back to the Future Buy of Micron, so I thought I would present another story line that gives readers a better perspective on what may be coming down the road. In this case, it is the story of AMD and Samsung partnering to counter Intel’s platform play with Micron. The initial out-of-the-box idea of Intel buying Micron is based on my theory that whoever controls the platform wins. The new mobile environment is driven by two components: the processor and NAND flash. You can argue wireless technologies, but Qualcomm is like Switzerland, supplying all comers. Intel (CPU centric) and Samsung (NAND centric) are the two likely head-to-head competitors. Each one needs a partner to fill out the platform: thus, Intel with Micron and Samsung with AMD. The semiconductor world can operate in a unipolar or bipolar fashion; multipolar eventually consolidates to one of the former.

The challenge for a company that operates like a monopoly is that it slows down in delivering new products, or fails to address new market segments. Intel has had a long run as the leading supplier of processors in laptops and notebook PCs. However, as most are witnessing now, it missed the Smartphone and tablet markets. In the notebook market, Intel could deliver processors within a 35 Watt (TDP) threshold. Now Intel is scrambling to redesign an x86-based processor that can meet the more stringent power requirements of iPhones and iPads. The Ultrabook initiative, which started in the spring, is an attempt to close the gap with tablets with a PC product that has much better battery life and is closer in weight.

It will take two years for the initiative to come to full completion. The new mobile world of iPads, Smartphones and MacBook Airs can trace its genealogy to version 2.0 of the iPod. It was at this point that Steve Jobs converted Apple to a roadmap that would build innovative products around the lower power and smaller physical dimensions of NAND flash. And with Moore’s Law behind it, NAND flash offers a path to lower-cost storage for future products that will tap into the cloud. When one looks at the bill-of-materials profile of the components in Apple’s iPhones and iPads, one can see that the NAND flash is anywhere from 2 to 5 times the dollar content of the ARM processor. In the MacBook Air, the NAND flash content is one-half to slightly more than that of the Intel processor. If you were to combine the three platforms, flash outsells the processor content by at least 3:1. Given current trends this will grow, and it therefore becomes the basis for Intel seeking to be a flash supplier. This is especially true if Intel can make a faster proprietary link between the processor and storage.
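
A quick worked example, using purely hypothetical BOM dollars and unit mixes of my own (not Apple’s actual costs), shows how a blended ratio of roughly 3:1 can fall out of those per-platform figures:

```python
# Hypothetical (flash $, processor $, relative unit volume) per platform;
# illustrative numbers only, chosen to match the ranges quoted above.
platforms = {
    "iPhone":      (50, 10, 25),   # flash ~5x the ARM processor content
    "iPad":        (65, 15, 7),    # flash ~4x
    "MacBook Air": (120, 220, 1),  # flash ~0.5x the Intel processor
}

flash = sum(f * v for f, p, v in platforms.values())
proc = sum(p * v for f, p, v in platforms.values())
print(f"blended flash:processor dollar ratio ~ {flash / proc:.1f}:1")  # ~3.2:1
```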

Turning to Samsung’s side of the platform: they obviously recognize the growing trend of NAND flash in mobile compute platforms. Samsung should look to leverage this strength into a platform play that also includes the processor, in this case both ARM and x86. Samsung will also look for ways to separate itself from competitors such as Toshiba and SanDisk. This is where pushing ahead early to 450mm fabs could have an impact.

During the course of the next decade, three major platform battles will take place between ARM and x86 processors. Today Intel has dominance in servers and legacy desktop and notebook PCs, while ARM dominates in SSD-based Smartphones and tablets. The crossover platform is the MacBook Air, with an Intel processor and an SSD. Intel has been raising ASPs in the server market, and will likely continue to do so, as a value proposition on data center power consumption. In the traditional PC space, Intel is confronted with a slow-growth market that will require it to reduce ASPs in order to prevent ARM from entering the space and to avoid losing share to the SSD-based platforms. There is not a direct 1:1 cannibalization in this scenario, but we will understand more fairly soon. ASP declines by Intel will in one way or another be a function of how to keep their fabs full.

As one can see, there are a lot of variables in determining who wins, and to what degree, in all three of the major platforms. If Samsung wants to be a major player in all three, then it needs an x86 architecture as well as a top-notch ARM design team to compete against Intel. Assuming NAND flash will grow in revenue faster than x86 processors, Samsung should utilize AMD’s x86 to strip the profits out of the legacy PC space and the highly profitable server space. Intel will likely utilize flash to enhance the platform in terms of improving overall performance relative to ARM. Because Intel supplies 100% of Apple’s x86 business, it will have a more difficult time offering discounts to non-Apple customers, since any discount will be immediately subtracted from Apple’s purchases. Since Apple is the growth player in the PC market, Apple will dictate Intel’s floor pricing. AMD is not an Apple supplier, and therefore has the freedom to experiment with x86 pricing with the rest of the PC market. To implement a strategy complementary to the MacBook Air, AMD needs to adjust its processor line by developing a processor that moderates x86 performance in favor of greater graphics performance. The combined solution (or APU in AMD terminology) must be sold for $50 to $75, more than $150 less than Intel’s solution. And finally, the maximum thermal design power (TDP) of the processor should be in the range of 2-5W.

In the past month, many of the Taiwan-based notebook OEMs have complained that they are unable to match Apple’s price on the entry-level $999 MacBook Air. Apple can now secure LCD panels, NAND flash and other components at lower prices. In addition, the PC makers must pay Microsoft an O/S license fee. For these vendors to be able to compete, they must utilize a non-Intel processor. The lowest-cost Intel i5 ULV processor is roughly $220, and Intel will likely not offer a lower-cost ULV processor until Ivy Bridge reaches the mid-life cycle of its production sometime in 2013.

On the ARM front, Samsung needs an experienced design team to develop a family of processors for Smartphones and tablets. The highly successful A4 processor was designed by a group in Austin called Intrinsity, which Apple snapped up last year. Mark McDermott, one of the co-founders and someone I once worked with at Cyrix in the 1990s, has been designing ultra-low-power processors for 20 years. Experience counts, and Samsung is in need of processor designers who can make the performance and power tradeoffs between processor and graphics cores. AMD is overloaded with engineering talent.

The platform wars, not just processor wars, are heating up as Intel and Samsung look to gain control of the major semiconductor content going into new mobile devices, legacy PCs and data center servers. It looks to be a decade-long struggle that will be better understood after 450mm fabs are in place. What may have seemed out of the question a few months ago (e.g. Intel buying Micron or Samsung teaming up with AMD) is likely to be up for serious consideration. Who would have guessed a month ago that Google would buy Motorola, or that HP would exit the PC business? The tectonic plates are shifting.


Transistor Level IC Design?
by Daniel Payne on 08-26-2011 at 1:23 pm

If you are doing transistor-level IC design then you’ve probably come up against questions like:

  • What changed in this schematic sheet?
  • How did my IC layout change since last week?

In the old days we would hold up the old and new versions of the schematics or IC layout and try to eyeball what had changed. Now we have an automated tool that does this comparison for us: Visual Design Diff (VDD) from ClioSoft.

If you’d like to win an iPad 2 then go and play their game to spot the differences.

Also Read

How Tektronix uses Hardware Configuration Management tools in an IC flow

Richard Goering does Q&A with ClioSoft CEO

Hardware Configuration Management at DAC


Third Generation DFM Flow: GLOBALFOUNDRIES and Mentor Graphics
by Daniel Payne on 08-26-2011 at 11:17 am

[Figure: Calibre yield analyzer DFM flow]

Introduction
Mentor Graphics and GLOBALFOUNDRIES have been working together since the 65nm node, over several process generations, on improving IC design yield. Michael Buehler-Garcia, director of Calibre Design Solutions Marketing at Mentor Graphics, spoke with me by phone today to explain how they are working with GLOBALFOUNDRIES on a 3rd-generation DFM (Design For Manufacturing) flow.

3rd-party IP providers like ARM and Virage have been using this evolving DFM flow to ensure that SoCs will have acceptable yields. If the IP on your SoC is litho-clean, then the effort to make the entire SoC clean is decreased. GLOBALFOUNDRIES mandates that its IP providers pass a DFM metric.

Manufacturing Analysis and Scoring (MAS)
Box A of the flow shown above is where GLOBALFOUNDRIES measures yield and gathers the yield modeling info needed to give Mentor the design-to-silicon interactions. This could be equations that describe the variation in fail rates of recommended rules, or defect density distributions for particle shorts and opens.

Random and Systematic Defects and Process Variations
At nodes of 100nm and below, both random defects and process variations limit yield. Critical Area Analysis (CAA) is used for random defects, and Critical Failure Analysis (CFA) is used for systematic defects and process variations. These analyses help pinpoint problem areas in the IC layout prior to tape-out.
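
As a rough illustration of the random-defect side, CAA results typically feed a Poisson-style yield model. The sketch below uses made-up defect densities and critical areas, not any foundry’s actual MAS data:

```python
import math

# Hypothetical (D0 defects/cm^2, critical area cm^2) per failure mechanism;
# real values come from the foundry's Manufacturing Analysis and Scoring.
mechanisms = {
    "metal shorts": (0.10, 0.8),
    "metal opens":  (0.05, 0.6),
    "via opens":    (0.02, 1.1),
}

# Poisson yield model: Y = exp(-sum_i D0_i * Acrit_i)
lam = sum(d0 * area for d0, area in mechanisms.values())
print(f"predicted random-defect-limited yield: {math.exp(-lam):.1%}")  # ~87.6%
```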

DRC+ Pattern-based Design Rule Checking Technology
Patterns that identify low-yield areas of an IC can be defined visually and then run in a DRC tool like Calibre.
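
Conceptually, the matching step works like the toy sketch below, which scans a rasterized layout for a known low-yield motif. Production DRC+ decks match polygon-level patterns inside Calibre, but the idea is the same (bitmap and motif here are hypothetical):

```python
# Toy pattern matcher: the "bad" motif is a set of relative cells that must
# all be filled for a match; real DRC+ patterns are polygon configurations.
BAD_MOTIF = [(0, 0), (0, 1), (1, 1)]

def find_pattern(layout, pattern):
    rows, cols = len(layout), len(layout[0])
    hits = []
    for r in range(rows):
        for c in range(cols):
            if all(0 <= r + dr < rows and 0 <= c + dc < cols
                   and layout[r + dr][c + dc] for dr, dc in pattern):
                hits.append((r, c))
    return hits

layout = [[0, 1, 1],
          [0, 0, 1],
          [1, 1, 1]]
print(find_pattern(layout, BAD_MOTIF))   # [(0, 1)]
```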

Litho Friendly Design (LFD)
Calibre LFD accurately models the impact of lithographic processes on “as-drawn” layout data to determine the actual “as-built” dimensions of fabricated gates and metal interconnects. There are new LFD design kits for the 28nm and 20nm nodes at GLOBALFOUNDRIES.

Calibre LFD uses process variation (PV) bands to predict failures in common configurations, including pinching, bridging, area overlap and CD variability.
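
The arithmetic behind a PV-band check is simple. The sketch below flags worst-case bridging between two facing edges using illustrative numbers rather than any real foundry rule; pinching would use the innermost contours instead:

```python
# Each edge can print anywhere inside its PV band, so worst-case bridging
# uses the outermost contour of both facing edges. Numbers are hypothetical.
def worst_case_space(drawn_space_nm, outer_bloat_a_nm, outer_bloat_b_nm):
    return drawn_space_nm - outer_bloat_a_nm - outer_bloat_b_nm

space = worst_case_space(drawn_space_nm=40, outer_bloat_a_nm=22,
                         outer_bloat_b_nm=22)
print(f"worst-case printed space: {space} nm",
      "-> bridging hot-spot" if space <= 0 else "-> clean")
```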

In early 20nm process development, the foundry uses Calibre LFD to predict hot-spots and then create the design rules.

Place and Route
Integration with the Olympus-SOC™ design tool enables feed-forward of Calibre LFD results to give designers guidance on recommended layout improvements, and to enable revalidation of correct timing after modifications.

Summary
Foundries, EDA vendors and IC design companies are collaborating very closely to ensure that IC designs will have both acceptable yield and predictable performance. GLOBALFOUNDRIES and Mentor Graphics continue to partner on their 3rd generation DFM flow to enable IC designs at 28nm and smaller nodes. AMD is a leading-edge IC company using the Calibre DFM tools on the GLOBALFOUNDRIES process.

To learn more about how Mentor and GLOBALFOUNDRIES are working together, you can visit the Global Technology Conference at the Santa Clara Convention Center on August 30, 2011.


Mentor catapults Calypto
by Paul McLellan on 08-26-2011 at 10:36 am

Mentor has transferred its Catapult (high-level synthesis) product line, including the people, to Calypto. Terms were not disclosed, but apparently it is a non-cash deal. Calypto gets the product line; Mentor gets a big chunk of ownership of Calypto. So maybe the right way to look at this is as a partial acquisition of Calypto.

It has to be the most unusual M&A transaction that we’ve seen in EDA since, maybe, the similar deal when Cadence transferred SPW to CoWare. There are some weird details too: for example, current Catapult customers will continue to be supported by Mentor.

Who is Calypto? The company was formed years ago to tackle the hard problem of sequential formal verification (sequential logical equivalence checking, or SLEC). The market for this was people using high-level synthesis (HLS), since they didn’t have any way other than simulation to check that the tool wasn’t screwing up. There were many HLS tools: Mentor’s Catapult, Synfora (since acquired by Synopsys), Forte (still independent). More would come along later: AutoESL (since acquired by Xilinx), Cadence’s C-to-Silicon. But there really weren’t enough people at that time using HLS seriously to create a big enough market for SLEC.

So Calypto built a second product line on the same foundation, doing power reduction by sequential optimization. This looks for things like “if this register can be clock-gated under certain conditions, so it doesn’t change on this clock cycle, then the downstream register can be clock-gated on the following clock cycle because it won’t change.” For certain types of designs this turns out to save a lot of power. And a lot more people were interested in saving power than in doing SLEC (although everyone who saved power this way needed SLEC to make sure the design was functionally the same afterwards).
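
A toy Python model of a two-register pipeline (my illustration of the idea, not Calypto’s implementation) shows why the downstream gating is sequentially safe: skipping the redundant capture never changes the observable output sequence:

```python
import random

def comb(x):
    # Arbitrary combinational logic between the two registers
    return (3 * x + 1) & 0xFF

def run(inputs, enables, gate_downstream):
    r1 = r2 = 0
    en_prev = True          # conservatively capture on the first cycle
    outputs = []
    for din, en in zip(inputs, enables):
        # If r1 held its value last cycle, r2's input is unchanged this
        # cycle, so the downstream capture can safely be skipped (gated).
        nxt_r2 = r2 if (gate_downstream and not en_prev) else comb(r1)
        nxt_r1 = din if en else r1
        r1, r2, en_prev = nxt_r1, nxt_r2, en
        outputs.append(r2)
    return outputs

random.seed(1)
ins = [random.randrange(256) for _ in range(1000)]
ens = [random.random() < 0.3 for _ in range(1000)]
assert run(ins, ens, False) == run(ins, ens, True)
print("gated and ungated pipelines match on every cycle")
```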

So now Calypto also has HLS, and thus a tidy little portfolio: HLS, SLEC for checking that HLS didn’t screw up, and power reduction. Presumably in time some of the power reduction technology can be built into HLS so that you can synthesize for power or performance or area or whatever.

Calypto was rumored to be in acquisition talks with Cadence last year but obviously nothing happened (my guess: they wanted more than Cadence was prepared to pay). They were also rumored to be trying to raise money without success.

Mentor says it remains deeply committed to ESL and views this transaction as a way to speed adoption. I don’t see it. I’m not sure how this works financially. Catapult was always regarded as the market leader in HLS (by revenue), but Mentor also had a large team working on it. If the product is cash-flow positive, then I can’t see why they would transfer it; if it is cash-flow negative, I don’t see how Calypto can afford it unless there is a cash injection as part of the transaction.

So what does Mentor have left to speed adoption? The other parts of Simon Bloch’s group (apparently he is out too) were FPGA synthesis (normal RTL level) and virtual platform technology called Vista.

Maybe Mentor decided that HLS requires the kind of focused sales force that only a start-up has. Mentor seems to suffer (paging Carl Icahn) from relatively high sales costs (although Wally says that is largely because they account slightly differently from, say, Synopsys). Their fragmented product line means that their sales costs are almost bound to be higher than those of the other big guys, who are largely selling a complete capability (give us all your money, or most of it, and we’ll give you all the tools you need, or most of them).

Or perhaps it is entirely financially driven: Mentor gets some expenses off its books and reduces its sales costs a little. But without knowing the deal, or the low-level ins and outs of Mentor’s financials, it’s not really possible to tell.

Full press release here.


20nm SoC Design
by Paul McLellan on 08-25-2011 at 12:48 am

There are a large number of challenges at 20nm that didn’t exist at 45nm or even 32nm.

The biggest issues are in lithography. Until now it has been possible to make a reticle using advanced reticle enhancement technology (RET) decoration and have it print. Amazing when you think that at 45nm we are making 45nm features using 193nm light; a mask is a sort of specialized diffraction grating. But at 20nm we need to go to double patterning, whereby only half the polygons on a given layer can be on one reticle, and a second reticle is needed to carry the others. Of course, the rules for which polygons go on which reticle are not directly comprehensible to designers. It is also likely that we are moving towards restricted design rules, where instead of minimum spacing rules we have rules that restrict spacing to a handful of values. We’ve pretty much been doing that for contact and via layers for years, but now it will affect everything. This explosion of design rules means that the rules themselves are pretty much opaque to the designers who have to follow them, so design rule checking, RET decoration, reticle assignment and so on must be tightly integrated into both automated tools (such as place and route) and more manual tools (such as layout editors).
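
To make the decomposition problem concrete: mask assignment is essentially graph two-coloring, where polygons spaced below the single-exposure pitch must land on different reticles. Here is a minimal sketch on toy data, not a production decomposer:

```python
from collections import deque

def decompose(polygons, conflicts):
    """Assign each polygon to mask 0 or 1 so that no two polygons closer
    than the single-exposure pitch share a mask. Returns None on an odd
    cycle, i.e. a native conflict the layout must be redrawn to remove."""
    color = {}
    for start in polygons:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflicts.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None     # odd conflict cycle: not decomposable
    return color

# Toy layout: edges mark pairs of polygons with sub-pitch spacing
conflicts = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2", "p4"], "p4": ["p3"]}
print(decompose(["p1", "p2", "p3", "p4"], conflicts))
# {'p1': 0, 'p2': 1, 'p3': 0, 'p4': 1}
```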

Another theme that runs through many blog entries here is that variation is becoming more and more extreme. The variance gets so large that it is not possible to simply guard-band it; otherwise you’ll find you’ve built a very expensive fab for minimal improvement in performance. Variation needs to be analyzed more systematically than that.
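
A quick numerical illustration (hypothetical delay numbers) of why naive guard banding becomes ruinous: stacking every stage’s 3-sigma worst case grossly overstates the true 3-sigma path delay, because independent variations partially cancel:

```python
import random
import statistics

random.seed(1)
N_STAGES, TRIALS = 20, 100_000
MU, SIGMA = 1.0, 0.15   # hypothetical per-stage delay (ns) and its sigma

# Guard-banded estimate: every stage simultaneously at its 3-sigma worst case
guard_band = N_STAGES * (MU + 3 * SIGMA)

# Statistical estimate: independent per-stage variation, 3-sigma of the path
paths = [sum(random.gauss(MU, SIGMA) for _ in range(N_STAGES))
         for _ in range(TRIALS)]
stat_3sigma = statistics.mean(paths) + 3 * statistics.stdev(paths)

print(f"guard-banded: {guard_band:.1f} ns")    # 29.0 ns
print(f"statistical:  {stat_3sigma:.1f} ns")   # ~22.0 ns: far less margin
```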

These two aspects are the major challenges. Of course we still have all the old challenges: designs get larger and larger from node to node, requiring ever more productive tools and better databases. Not to mention timing closure, meeting power budgets, analyzing noise across chip, package and board, and maybe 3D TSV designs. Manufacturing test. The list goes on.

To get a clear vision of what success will require, view Magma’s webinar on 20nm SoC design here.