
Are 28nm Transistors the Cheapest…Forever?
by Paul McLellan on 09-17-2013 at 10:43 am

It is beginning to look as if 28nm transistors, which are cheaper per million gates than those of any earlier process such as 45nm, may also turn out to be cheaper per million gates than those of any later process such as 20nm.

What we know so far: FinFET seems to be a difficult technology because of its 3D structure and the novel manufacturing it requires, but it appears to be stable once mastered. Intel ramped it at 22nm and TSMC says it is on track to have it at 16nm. What Intel doesn’t have at 22nm, and TSMC does have at 20nm, is double patterning, which seems to bring severe variability problems even once mastered. TSMC has not yet ramped 20nm to HVM, so there is still an aspect of wait-and-see there.

The cheap form of double patterning is non-self-aligned, meaning that the alignment of the two patterns on a layer depends entirely on stepper repeatability, which is apparently of the order of 4nm. This means there is large variation in any sidewall effects (such as sidewall capacitance), since the distance between the “plates” of the capacitor may vary by up to 4nm. This variability is very hard to remove (the stepper people are trying to tighten up repeatability, which will be needed in any case for later processes). Instead, EDA tools need to analyze it and designers have to live with it, but the margins to live with are getting vanishingly small.
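
To put a rough number on that, here is a back-of-envelope sketch; the wire geometry and dielectric constant below are my assumptions for illustration, not values from any foundry:

```python
# Back-of-envelope: how a 4nm overlay error moves sidewall capacitance.
# Illustrative numbers only; real extraction uses 3D field solvers.
EPS_0 = 8.854e-12     # F/m
K_DIEL = 2.7          # assumed low-k dielectric constant
HEIGHT = 100e-9       # assumed wire height (m)
LENGTH = 1e-6         # 1um of parallel run (m)
NOMINAL_GAP = 64e-9   # assumed nominal wire spacing (m)
OVERLAY = 4e-9        # the ~4nm stepper repeatability from the article

def sidewall_cap(gap_m):
    """Parallel-plate approximation of the coupling cap between two wires."""
    return K_DIEL * EPS_0 * HEIGHT * LENGTH / gap_m

c_nom = sidewall_cap(NOMINAL_GAP)
for gap in (NOMINAL_GAP - OVERLAY, NOMINAL_GAP + OVERLAY):
    shift = (sidewall_cap(gap) / c_nom - 1) * 100
    print(f"gap {gap*1e9:.0f}nm: capacitance shift {shift:+.1f}%")
# Roughly +6.7% / -5.9% of coupling-cap swing from overlay alone,
# before any other source of variation is even considered.
```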

There is a more expensive form of double patterning that is self-aligned, using a mandrel and spacers. A mandrel pattern is laid down first and spacer material is deposited over it; an etch-back leaves the spacer material only on the sidewalls of the mandrel, and the mandrel is then removed. The remaining sidewall spacers are used as the mask to etch the underlying material. This involves a lot more process steps and is a lot more expensive, but it does have less variability, since the two sidewalls are closely aligned by the way they were manufactured. It looks like we will need this approach to construct the FinFET transistors and their gates for 10nm and below.

A general rule in fabrication is to touch the wafer as few times as possible. Double patterning inevitably drives this up. One way to get it down is to use bigger wafers and, of course, there is a big push towards 450mm wafers. These provide about the same reduction in cost per million transistors as we used to get from a process generation (where the rule of thumb was twice as many transistors with a cost increase of 15% per wafer leaving about a 35% cost reduction per million transistors). But 450mm reduces the cost of all processes, and so probably the only thing that will ever be cheaper than 28nm on 300mm wafers will be 28nm on 450mm wafers. Or perhaps 28nm on 300mm wafers running in a fully-depreciated fab.
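
A small sketch of the arithmetic behind that rule of thumb; the 1.8x practical density gain and the 450mm wafer-cost premium are assumed, illustrative numbers:

```python
# Node-to-node cost arithmetic: density gain vs. wafer-cost premium.
def cost_per_transistor_ratio(density_gain, wafer_cost_ratio):
    """New cost per transistor relative to the previous node."""
    return wafer_cost_ratio / density_gain

ideal = cost_per_transistor_ratio(2.0, 1.15)  # perfect 2x shrink, +15% wafer
real = cost_per_transistor_ratio(1.8, 1.15)   # assumed ~1.8x density in practice
print(f"ideal shrink:   {(1 - ideal)*100:.0f}% cheaper per transistor")  # ~42%
print(f"typical shrink: {(1 - real)*100:.0f}% cheaper per transistor")   # ~36%

# 450mm wafers have ~2.25x the area of 300mm. If a 450mm wafer costs
# less than 2.25x as much to process, every node gets cheaper too.
wafer_450_cost = 1.5  # assumed (and much debated) cost premium vs 300mm
print(f"450mm effect: {(1 - wafer_450_cost/2.25)*100:.0f}% cheaper")     # ~33%
```

With a perfect 2x shrink the classic numbers work out to about a 42% reduction; with more realistic density gains the figure lands in the mid-30s, which is where the rule of thumb comes from.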

The other hope for cost reduction is EUV lithography. I’m skeptical about it, as you know if you’ve read my other blogs about it. Even if it works, people appear to be planning to do 10nm without it (except in pilot stuff). EUV is almost comical if you describe it to someone. Droplets of molten tin are shaped with a small laser. Then a gigawatt-sized power plant blasts the molten tin with half a dozen huge lasers, vaporizing it and producing a little bit of EUV light. But everything absorbs EUV light, so everything also has to be in a vacuum. Then the light is bounced off half a dozen mirrors and a reflective mask. And I use “reflective” in a relative way, since only about 70% of the light survives each bounce: these are mirrors that work by interference and Bragg reflection, because a regular polished metal mirror would simply absorb the EUV. So maybe 4% of the light reaches the photoresist. And if that isn’t enough, the masks cannot be made defect-free. And contamination on the mask will print, since we (probably) can’t put a pellicle on it to keep contamination out of the focal plane, since the pellicle would absorb all the EUV too. But maybe it will all come good. After all, when you first hear about immersion lithography or CMP, they sound like pretty unlikely ways to make electronics too.
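
The light budget compounds bounce by bounce; a quick sketch with round numbers (the reflection counts per optical path are my assumption):

```python
# EUV light budget: ~70% reflectivity per multilayer Bragg mirror
# compounds quickly across the illuminator, mask and projection optics.
REFLECTIVITY = 0.70  # approximate peak for a Mo/Si multilayer mirror
for bounces in (7, 9, 11):
    surviving = REFLECTIVITY ** bounces * 100
    print(f"{bounces} reflections: {surviving:.1f}% of the source EUV survives")
# 7 bounces: 8.2%, 9 bounces: 4.0%, 11 bounces: 2.0%
```

Nine or so reflections at 70% each is how you end up with only a few percent of the light at the resist, which in turn is why the source has to be so absurdly powerful.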

If this scenario is true, there are a couple of big problems. The first is that electronics will stop getting cheaper. You can have faster processors, lower power, more cores or whatever. But it will cost you. In the past we have always had a virtuous cycle where costs got reduced, performance and power improved, and design size increased. So even if you didn’t want to move from 90nm to 65nm for performance, power or size reasons, the cost reduction made you do it anyway. That will no longer be true. Yes, Apple’s Ax chips for high-end smartphones will move even if the chips cost twice as much: in a $600 phone you won’t notice. But the mainstream smartphone market, and the segment with predicted high growth, is sub-$100. Those phones will all have to be made at 28nm for cost reasons, and make do without the stuff 20nm and below offers. Products that can support a premium for improved performance will benefit, of course, but we’ve never been in an era where next year’s quad-core chip costs twice what this year’s dual-core chip did.

The other big problem is that if only a few designs move to these later nodes (the bleeding-edge designs that really need the performance), will that be enough to justify the multi-billion dollar investment in developing the processes and building the fabs? Those leading-edge smartphone, router and microprocessor chips can go to 22/20nm for a year, then move to 16/14nm. But then…crickets. All the other designs can’t afford to pay the premium and will stay at 28nm. Chip design will be like other industries, batteries say, improving at most by a few percent per year and no longer with any exponential component.

To be fair, Intel have said publicly that they see costs continuing to come down, and various theories circulate as to why. It seems likely that they believe it rather than just posturing. Maybe they know something nobody else does. I know equipment people who say that they no longer get any access to Intel fabs, so they don’t really know everything their equipment is being used for. Maybe Intel are mistaken. Or maybe it is true for Intel, who are transitioning from a very high-margin microprocessor business that is not very cost-sensitive to a foundry/SoC business that is very cost-sensitive, and so are also transitioning from not having good wafer costs to being forced to be competitive with everyone else. I’ve said before that managers at Intel often think they are better than they are, since there is so much margin bleedthrough from microprocessors that everything else looks good. Maybe this is just another facet of that phenomenon.

See also my report on EUV from Semicon West in July.
See also my take on Intel’s cost-reduction statements.


TSMC’s 16FinFET and 3D IC Reference Flows
by Paul McLellan on 09-17-2013 at 2:01 am

Today TSMC announced three reference flows that they have been working on along with various EDA vendors (and ARM and perhaps other IP suppliers). The three new flows are:

  • 16FinFET Digital Reference Flow. Obviously this has full support for non-planar FinFET transistors, including extraction, quantized-pitch placement, low-Vdd operation, electromigration and power management.
  • 16FinFET Custom Design Reference Flow. This supports the non-digital stuff: full-custom transistor-level design and verification, including analog, mixed-signal, custom digital and memory.
  • 3D IC Reference Flow, addressing vertical integration with true 3D stacking using TSVs through active silicon and/or interposers.


There have been multiple silicon test vehicles. The digital reference flow uses an ARM Cortex-A15 multicore processor as a validation vehicle and helps designers understand the challenges of full 3D RC modeling and quantized transistor widths, which are the big “new” gotchas in the FinFET world. The flow also includes methodology and tools for improving PPA at 16nm, including low-voltage operation analysis, high-resistance layer routing optimization, and path-based/graph-based analysis correlation to improve timing closure.
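
Quantized widths are easy to picture with the usual effective-width formula; a small sketch (the fin dimensions below are illustrative, certainly not TSMC’s):

```python
# FinFET widths come in fin-count steps: W_eff = n * (2*h_fin + t_fin).
# Fin dimensions are illustrative only; real foundry values are confidential.
H_FIN = 30e-9  # assumed fin height (m)
T_FIN = 8e-9   # assumed fin thickness (m)

def effective_width(n_fins):
    return n_fins * (2 * H_FIN + T_FIN)

target = 250e-9  # the planar width a designer might have asked for
n = round(target / (2 * H_FIN + T_FIN))
print(f"nearest fin count: {n}, W_eff = {effective_width(n)*1e9:.0f}nm")
# Prints 4 fins and 272nm; 3 fins would give 204nm. You never get 250nm,
# so transistor sizing becomes a discrete optimization problem.
```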

By definition there is less automation in the custom reference flow because it’s custom and the designer is expected to do more by hand. But obviously it includes the verification necessary for compliance with 16nm manufacturing and reliability requirements.

The 3D IC flow allows everything to move up into the third dimension. This is still a work in progress, so I don’t think this will be any kind of final 3D flow. But it supports what you would expect: the capability to stack die using through-transistor-stacking (TTS), through-silicon vias and microbumps, backside metal routing, and TSV-to-TSV coupling extraction.

So what is TTS? It is TSMC’s own name for TSVs on wafers containing active devices (as opposed to on interposers, which typically contain only metal routing and decaps, and where they still use the TSV name). The 3D test vehicle has stacked memories on top of a 28nm SoC logic die (connected via microbumps). The 28nm logic die has TSVs through active silicon and connects to the backside routing (also called the re-distribution layer, or RDL) and C4 bumps on the backside of the logic die. The bumps then connect to a standard substrate on the module. So this is true 3D, not 2.5D, where die are bumped and flipped onto an interposer and only the interposer (which contains no active devices) has TSVs. One of the challenges of TSVs is that the stress of manufacturing them alters transistor threshold voltages in the vicinity, and probably other things I’ve not heard about.


So FinFETs are coming at 16nm and the flows are ready to start designs, already validated in silicon. Plus a true 3D More than Moore flow.

OIP is coming up on October 1st. I’m sure that one of the keynotes will have some more about 16nm and 3D. For details and to register go here.


How to Design an LTE Modem
by Paul McLellan on 09-16-2013 at 4:24 pm

Designing an LTE modem is an interesting case study in architectural and system level design because it is pretty much on the limit of what is possible in a current process node such as 28nm. I talked to Johannes Stahl of Synopsys about how you would accomplish this with the Synopsys suite of system level tools. He is the first to admit that this is not a push-button flow where everything flows cleanly from one tool to the next, but more of a portfolio of technologies that can be used to get a modem done. Another complication over previous generations is that multiple radios can be used simultaneously.

LTE is actually a whole series of different standards with different uplink and downlink data rates, but one thing is constant: no matter what the data rate, the power dissipation of the modem must be such that the battery of the phone will last all day. So efficient tradeoff analysis is required to meet power and performance goals.

A high-end LTE modem requires approximately 1 TOPS (a trillion operations per second) within roughly a 1W power budget. Getting there requires a complex architecture in which things happen in parallel. The picture above shows the type of architecture involved, with dedicated FFT units and multiple SIMD execution units.
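
Those two headline numbers alone dictate the architecture; a quick sketch (the 500MHz DSP clock is an assumed figure for illustration):

```python
# Why ~1 TOPS at ~1W forces a parallel architecture.
OPS_PER_SEC = 1e12  # required throughput
POWER_W = 1.0       # power budget
CLOCK_HZ = 500e6    # assumed DSP clock for a 28nm modem core

energy_per_op = POWER_W / OPS_PER_SEC
ops_per_cycle = OPS_PER_SEC / CLOCK_HZ
print(f"energy budget: {energy_per_op*1e12:.1f} pJ per operation")     # 1.0 pJ
print(f"parallelism needed: {ops_per_cycle:.0f} operations per cycle")  # 2000
# No single core retires 2000 ops/cycle at 1pJ each, hence the mix of
# dedicated FFT hardware and wide SIMD units described above.
```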

In principle it is possible to design a modem entirely in software, but the power dissipation would be unacceptably high. It is also possible to design highly optimized RTL but the design cycle would stretch out unacceptably and it would be too inflexible to cope with changes in the standards and the phone price points.


So step 1 is architectural exploration to answer questions such as the following (a toy exploration sweep is sketched after the list):

  • Application-level parallelism?
  • How many cores?
  • Which parts in HW and SW?
  • Memory architecture?
  • Interconnect topology?
  • Performance, power?
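
Here is a toy sweep over two of these axes, core count and hardware/software split; every model number is invented for illustration, and a real tool such as Platform Architect uses far richer performance and power models:

```python
# Toy design-space exploration: pick the lowest-power feasible point.
from itertools import product

WORKLOAD_OPS = 1e12     # ops/s the modem must sustain
CORE_OPS = 20e9         # assumed ops/s one programmable DSP core delivers
CORE_W = 0.08           # assumed watts per DSP core
ACCEL_W_PER_TOPS = 0.3  # assumed watts per TOPS in fixed-function hardware

best = None
for n_cores, hw_frac in product(range(1, 17), (0.5, 0.7, 0.9)):
    sw_ops = WORKLOAD_OPS * (1 - hw_frac)  # work left to software
    if sw_ops > n_cores * CORE_OPS:
        continue                           # the cores can't keep up
    power = (n_cores * CORE_W
             + hw_frac * WORKLOAD_OPS / 1e12 * ACCEL_W_PER_TOPS)
    if best is None or power < best[0]:
        best = (power, n_cores, hw_frac)

power, n_cores, hw_frac = best
print(f"best point: {power:.2f}W with {n_cores} cores, "
      f"{hw_frac:.0%} of the work in hardware")
# With these made-up numbers the sweep lands on 0.67W with 5 cores and
# 90% of the workload in dedicated hardware: software alone can't hit 1W.
```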


The verification of the architecture then requires a flow that takes both the basic block-level architecture and the actual software loads as input, with the goal of refining the architecture so that the block-level performance and power envelopes are defined and the interconnectivity (such as bus widths) is determined. This can involve cycle-accurate models, virtual platforms, ZeBu emulation boxes and FPGA prototypes.


One possible type of block to include in the design is an application-specific instruction-set processor (ASIP). Configurable processors are one approach to modem design, but they don’t necessarily hit the PPA sweet spot as well as an ASIP created with Synopsys’s processor design tool (the old LISA technology that came to Synopsys via CoWare). The processor will require specialized functions useful for modems: inverting matrices, error control coding (ECC) and so on.


One nice side-effect of the model-based approach is that at the end there is a virtual platform that can be used to accelerate software development before silicon is available (and perhaps after, since control and visibility are so much better in a virtual platform). Usually people don’t set out to change their software development methodology, but once the virtual platform has been created for architectural reasons, it is ideal for the very complex debugging involved (several loads of software running on different processors: control processor, DSP software, protocol software, hardware and so on, often each with its own debugger).

This approach doesn’t make LTE modem design easy but it does at least make it possible.

More details on Platform Architect, Processor Designer, and Virtualizer.


Intel Bay Trail Fail
by Daniel Nenni on 09-15-2013 at 5:00 pm

Now that the IDF 2013 euphoria is fading, I would like to play devil’s advocate and make a case for why Intel is still not ready to compete in the mobile market. It was very clear from the keynotes that Intel is a chip company, always has been, always will be, and that will not get them the market share they need to be relevant in mobile electronics. Just my devil’s advocate opinion, of course.

The first argument is the Bay Trail tablet offerings, which are mediocre at best. The WinSuperSite has a nice Fall Tablet Preview with pictures and everything you need to know to decide NOT to buy one. Notice there are no Bay Trail smartphones, just tablets big and small. How many people or corporations buy the same brand of tablet and phone? How many people or corporations will buy a new tablet every two years like they do smartphones? I still have my iPad 2 and, like my laptop, I have no plans to replace it until I absolutely have to (3-5 years). My bet is that there will be a fire sale on Bay Trail devices next year, so wait until then if you really want one.

“You’ve got to start with the customer experience and work backwards to the technology.”

The second argument is the Apple 64-bit SoC announcement last week, which totally eclipsed the Intel Bay Trail hype. Why is 64-bit a big deal? The additional performance is what everybody is talking about, but the real reason for 64 bits is software portability. Corporate America can now move PC-based applications to Apple tablets and phones, which will further accelerate the decline of Intel’s PC revenue stream. The other thing to note is that Apple is moving away from buying chips; instead they create their own custom SoCs based on a licensed ARM architecture. This allows Apple to optimize the SoC for iOS and deliver the optimum customer experience. Qualcomm and Samsung also create custom SoCs and, between the three companies, they own the mobile market. So who is Intel going to sell chips to? Certainly not the sub-$50 phone makers in emerging markets. Microsoft and the legacy PC manufacturers are all that is left?

The third argument is: do you really care what chips are inside your phone? Thanks to Intel marketing, it is clearly marked that my laptop is powered by an Intel i7. For tablets and smartphones that is not the case, nor will it ever be. The only reason I know my iPhone 5 has a 32nm dual-core SoC is because I work with the foundries, which is also why I know that the iPhone 5s A7 SoC is a 28nm LP SoC manufactured by Samsung. For those of you who think it is 28nm or 20nm silicon from TSMC, you didn’t read my “TSMC Apple Rumors Debunked”. The iProduct 6 will have TSMC 20nm silicon and the iProduct 6s will be both Samsung and TSMC 14nm, my prediction.

Fourth is Intel leadership. I met the new Intel CEO Brian Krzanich (briefly) after his keynote on Tuesday. The keynote itself was good: not too polished, whereas some keynotes look like something out of Las Vegas. Brian is definitely an engineer and even added a Q&A session afterwards, which was new. The answers to the questions, however, confirmed that Intel is still Intel. Will Intel deliver synthesizable cores? No. Will Intel license their IP? No. Will Intel allow their IP to be manufactured by anyone else? No. Will Intel start with the customer experience and work backwards to the technology? Absolutely not. Intel thinks it will dominate mobile electronics like it did the PC, with old-school benchmarking. Unfortunately, Samsung, ARM, Apple, Qualcomm, Broadcom, MediaTek, Nvidia, TSMC, and the rest of the fabless semiconductor ecosystem will not allow that to happen, no way, no how.

Also read:

The Significance of Apple’s 64 Bit A7

Intel Quark: Synthesizable Core But You Can’t Have It



Semiconductor Manufacturing in India?
by Pawan Fangaria on 09-15-2013 at 11:30 am


Last week I heard about the Indian Cabinet approving the proposal for setting up two fabs in India: one led by IBM, Tower Jazz and JP Associates (an Indian business house), and the other led by HSMC (Hindustan Semiconductor Manufacturing Co.), STMicroelectronics and Silterra. The Indian semiconductor community, including IESA (India Electronics and Semiconductor Association), has welcomed the step; indeed it must, as the community has long dreamed of having a fab in the country. I am especially delighted to hear this, as it brings back nostalgic memories of the 3-micron fab from my first job in the 1990s at ITI (Indian Telephone Industries, as it was then) in Bangalore. That fab became obsolete long ago, and another at SCL (Semiconductor Complex Limited at Chandigarh, near Delhi) became dysfunctional. Since then, no fab has seen the light of day in India.

It’s definitely something to cheer about (it could help keep a significant portion of the estimated $400B electronics revenue by 2020 within India, thereby cutting the Current Account Deficit year over year), provided it proves sustainable in a business sense, which means several things I am going to talk about in a minute. But before that, I must say that it can be sustainable only when it creates net positive value (essentially positive net present value and economic value addition) for the country as well as for the world. So, what are the factors that need to be looked at, and the questions answered, before we can celebrate success?

A Long Term Plan – As a foundry setup is highly capital-intensive, it must be supported by a solid long-term plan and financial backing. I am sure such planning was done before proposing; my concern is only that the three parties – the Government, the Indian private partner and the MNC partner – firmly support the plan for, say, at least 10 years. What-if scenarios need to be analyzed and firmed up sooner rather than later. The whole contingency plan needs to be in place now for the initiative to last.

Fiscal Sustenance – This pertains specifically to the Indian Government. Tax holidays, subsidies, zero duty, financial investment and so on will play an important role in promoting the fab along with the semiconductor industry in India, but they will put further pressure on an already large fiscal deficit. How prepared is the Government to support this?

Support Infrastructure – This is a very important aspect for the smooth running of the fab. Can world-class, sustainable infrastructure of the kind a modern fab requires be provided: swift transportation, large quantities of pure water, uninterrupted electricity, communication, a pollutant-free environment and so on? I am sure it can be done, but it needs careful planning, not only within the fab but outside as well, for an effective operation that delivers a positive end result.

Government Policy – This is one of the most important factors for such a massive step. The policy (including all kinds of subsidies, which may be tapered down in future with due conditional clauses) must remain valid and stable for at least 10 to 15 years, irrespective of which party is in power.

Business Leadership – When I talk about sustainability, business leadership by this consortium is a must, and it emanates from the owners of the organization. They need to own the business as one entity, and they need to provide the whole chip solution, not parts of it. Next comes operations: the foundry must produce at a profitable cost (infrastructure and environment play an important part in this) to remain viable; otherwise imports of chips or their parts will continue at lower cost and the foundry’s capacity will sit idle. We have seen examples of this in other areas in India, such as capital goods and some commodity imports. The foundry also needs to produce what the market demands. One more aspect of business leadership: it needs to integrate with the world market; in this modern globalized world, collaboration in the semiconductor ecosystem is a given.


Technology Leadership – Although I have no information about the process nodes and the like, I must say that in order to remain in business the technology must be forward-looking; otherwise it may become obsolete without notice in this fast-changing technological environment and we may go back to square one. Another aspect of technology leadership is that it must be superior and address the world market, rather than looking only at Indian domestic consumption and cutting electronics imports. That short-sightedness could again lead to obsolescence in due course.

India has been a service-oriented market. It needs to transform itself into a product-oriented market, and for that there is a lot more to be done by the Government, business leaders, entrepreneurs and people in general to change the mindset and take up ownership. It needs a positive, vibrant environment in which people, and specifically entrepreneurs, can see the real value created by their suppliers and employees, reward them, and in return create greater value for themselves. It must be a win-win situation.

Let’s see where we go from here on the realization of the foundry business in India. Comments welcome!


Sidense and TSMC Processes
by Paul McLellan on 09-14-2013 at 2:21 pm

I’ve written before about the basic capabilities of Sidense’s single-transistor one-time programmable memory products (1T-OTP). Just to summarize, it is an anti-fuse device that works by permanently rupturing the gate oxide under the bit-cell’s storage transistor, something that is obviously irreversible. Also, compared to devices that depend on sensing the presence or absence of a charge, the read voltages are low and so the memory is naturally low power. The memory does require some non-standard voltages, especially for programming, but these are all generated internally by charge pumps. Another key advantage of the anti-fuse approach is that it can be manufactured in a standard digital process with no additional masks or process steps required.
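
A toy model of how an anti-fuse bit can be read; all values here are invented for illustration and say nothing about Sidense’s actual sensing scheme:

```python
# Toy anti-fuse OTP read: a programmed bit has ruptured gate oxide
# (a low-resistance path); an unprogrammed bit leaks almost nothing.
R_PROGRAMMED = 50e3  # assumed ruptured-oxide resistance (ohms)
R_VIRGIN = 1e12      # assumed intact-oxide leakage resistance (ohms)
V_READ = 0.8         # a low read voltage; no stored charge to disturb
I_TRIP = 1e-6        # assumed sense-amp trip current (A)

def read_bit(r_cell):
    """Compare the cell current against the sense-amp trip point."""
    return 1 if V_READ / r_cell > I_TRIP else 0

print(read_bit(R_PROGRAMMED))  # 1: 16uA flows through the ruptured oxide
print(read_bit(R_VIRGIN))      # 0: ~1pA, far below the trip point
```

The huge gap between the two cell currents is why a low read voltage suffices, which is where the naturally low-power read behavior comes from.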

Sidense will be presenting at TSMC’s OIP on October 1st. The technology has been proven in both poly-gate and HKMG gate-last processes. As a result there is broad support for TSMC processes from 40nm down to 20nm (all planar), with FinFET support currently in development. Sidense 1T-OTP has completed IP9000 assessment on many nodes, with more coming later this year and next.

Obviously the picture at the start of this article shows a planar process; in a FinFET the gate oxide wraps around the fin. Nevertheless, the FinFET structure aligns well with Sidense’s OTP implementation. Compared to 20nm, the 16nm FinFET implementation has the same bit-cell architecture and OTP design, although the bit-cell and macros are smaller, with lower leakage and better performance.

There are also other Sidense products suitable for use in other TSMC processes typically used for analog, mixed-signal, high-voltage and similar designs. In all cases the Sidense memories depend only on the underlying standard digital process.

Betina Hold, director of R&D at Sidense, will be presenting An Antifuse-based Non-Volatile Memory for Advanced Process Nodes and FinFET Technologies at 4.30pm on the IP track (in the unenviable slot between attendees and beer). Register for OIP here. More details on Sidense’s product line here.


Back To The Future: 50th Anniversary of EDA
by Paul McLellan on 09-12-2013 at 1:03 pm

October 16th at the Computer History Museum, EDAC is hosting EDA: Back to the Future to celebrate 50 years of EDA. EDAC always has a fall event of some sort, and historically it has been the Kaufman Award Dinner. This year, the Kaufman Award was presented (to Chenming Hu) at the 50th DAC, so the fall EDAC calendar was open. Since Calma was founded in November 1963, which seems near enough to count as the beginning of EDA, EDAC decided to throw a big party – an industry reunion of sorts – to celebrate EDA’s 50th Anniversary.

One purpose of this event is to raise money for the Computer History Museum’s EDA Oral Histories Collection and Exhibit, a project (driven by Doug Fairbairn) to capture and preserve the history of EDA.

I talked to Kathryn Kranen (chief EDAC honcho. Or would that be honcha?) and she said that the goals were:

  • Bring together the EDA community, past and present
  • Create a fun, highly entertaining evening, with good food and wine
  • Raise money for the cause: capturing and preserving EDA history

The museum will be open privately for us that evening. If you have not seen it since the remodel then I highly recommend it. There will be 1½ hours to mingle (with good wine!) before the banquet dinner and entertainment begins.

At dinner, each table will be “hosted” by an industry luminary. You might sit with a previous Kaufman Award recipient like Bob Brayton or Randy Bryant. Or maybe one of the founders of EDAC, Rick Carlson or Dave Millman. Perhaps a previous CEO like Jack Harding, Penny Herscher, Bernie Aronson, Rajeev Madhavan, or Sanjay Srivastava. One of the current EDAC board members: Aart de Geus, Lip-bu Tan, Wally Rhines, Simon Segars, John Kibarian, Kathryn Kranen, Ravi Subramanian, Dean Drako, Ed Cheng, or Raul Camposano. Or an investor who has focused on EDA, like Jim Hogan or John Sanguinetti.

During dinner, Bill Joyner will reprise his history of EDA from its beginnings to the future, which everyone loved at DAC but which many people (including me) missed. Later there will be a live auction complete with comedian auctioneer. So if you want to purchase lunch with Aart, a cocktail party with Kathryn, or a time-slot to pitch to an EDA-friendly VC, this might be the evening for you. Of course, there will be the more traditional auction items like luxury time-shares, a private sailing outing, professional golf lessons, a collection of wine, and restaurant gift certificates.

If you want to donate something to the auction, then contact Mike Gianfagna. If you want to donate an original workstation from early days of EDA, then contact Doug Fairbairn (and you will also qualify for free admission to the event).

Buy tickets here.



Analog Characterization Environment (ACE)
by Daniel Nenni on 09-12-2013 at 10:00 am

I’m looking forward to the 2013 TSMC Open Innovation Platform Ecosystem Forum to be held Oct. 1st in San Jose. One paper in particular that has my attention is titled “An Efficient and Accurate Sign-Off Simulation Methodology for High-Performance CMOS Image Sensors,” by Berkeley Design Automation and Forza Silicon. It is not every day that we get a chance to learn how design teams are tackling the tough verification challenges in complex high-performance applications, such as image sensors.


CMOS Image Sensor

The paper will discuss how many image sensor performance-limiting factors appear only when all of the active and passive devices in the array are modeled, including random device noise and layout parasitics. Coupled with the highly sensitive nature of image sensors, where tens of microvolts of noise can create noticeable image artifacts, these characteristics create an enormous challenge for analog simulation tools, pushing both the accuracy and capacity simultaneously.

The presentation will highlight image sensor design and verification and include a description of Forza’s verification methodology, which uses a hierarchy of models for the image sensor blocks. At higher levels of the hierarchy, the complexity of the model is reduced, but the accuracy of the global interactions between blocks is maintained as much as possible.

CMOS Image Sensor Block Diagram

Forza’s verification flow relies on the Berkeley Design Automation (BDA) Analog FastSPICE (AFS) Platform. AFS is qualified on the latest TSMC Custom Design Reference Flow and, according to Forza, has significantly improved their verification flow.

Results will highlight how AFS Full-Spectrum Device Noise, included in the latest TSMC Custom Design Reference Flow, validates that the sensitive ADCs and readout chain will withstand the impact of device noise and parasitics. For top-level sign-off, AFS AMS enables Forza to speed up verification by using Verilog to model non-accuracy-critical circuits while maintaining nanometer SPICE accuracy on blocks that were independently verified in other tools. AFS Mega has the capacity, speed, and accuracy required to verify over 700 signal chains at the transistor level, including extracted parasitics.


ACE Visual Distribution Analyzer – 1000 Iterations

In terms of characterization, Forza relied on BDA’s Analog Characterization Environment (ACE) to improve their characterization coverage and efficiency. Results include Monte Carlo-based analysis to predict image sensor nonuniformity due to device mismatch. Additionally, AFS Circuit-Specific Corners, included in the latest TSMC Custom Design Reference Flow, eliminates the limitations of traditional digital process corners and generates circuit-specific corners for each measurement, suitable for analog designs.
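
For a flavor of what such a Monte Carlo analysis estimates, here is a toy sketch; the mismatch sigma and gain are invented numbers, not Forza’s:

```python
# Toy Monte Carlo of pixel-to-pixel nonuniformity from Vth mismatch.
import random

SIGMA_VTH = 1.5e-3  # assumed 1.5mV sigma of Vth mismatch per device
GAIN = 0.8          # assumed source-follower gain; offset ~ gain * dVth
N_PIXELS = 100_000

offsets = [GAIN * random.gauss(0.0, SIGMA_VTH) for _ in range(N_PIXELS)]
mean = sum(offsets) / N_PIXELS
sigma = (sum((x - mean) ** 2 for x in offsets) / N_PIXELS) ** 0.5
print(f"fixed-pattern offset sigma: {sigma*1e6:.0f} uV")  # ~1200 uV
# Around a millivolt of raw pixel-to-pixel offset, which the readout
# chain must suppress toward the tens-of-uV level the article calls
# visible; Monte Carlo tells you how much margin the correction needs.
```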


Also read: BDA Introduces High-Productivity Analog Characterization Environment (ACE)



The Significance of Apple’s 64 Bit A7
by Ed McKernan on 09-11-2013 at 9:00 pm

The disappointment among analysts is palpable. Dreams of low-cost iPhones for the masses were kicked away when Apple introduced its two new iPhones at the same price points as the old ones. Clearly Apple is playing a different game than most others anticipated as the market runs to 5B units. The market has split into the land of free Android hardware and software (80% of the market) and the 20% profitable niche that led Microsoft to buy Nokia in order to save the “burning platform.” The truth is every platform is on fire as the big ecosystem players try to profitably leapfrog to where the dollars are. Tim Cook has never explicitly said so, but his goal is not to offer cheap iPhones to the masses; rather it is to conquer the corporate Wintel empire in as short a period of time as possible. The refreshed iPhones will support his efforts; however, it is possible that A7-powered iPads will be a more significant contributor.

Microsoft’s acquisition of Nokia’s phone business is an affirmation that in the end software must reside on a physical device, within reach of every fingertip. Microsoft and Intel had 30 years to refine, improve and cost-reduce the PC, thereby displacing all but the fewest of mainframes. They were even able to execute a mid-life kicker with mobile PCs. In the end, however, cellular changed the form-factor requirements, sending PCs to the legacy farm. The folks in Redmond see where the threat of a reduced corporate compute footprint, combined with their lack of control of today’s mobile platforms, leads. And yet, as profitable as Office is, they refuse to port it to iOS because the hardware guys are now in control.

Apple’s introduction of the 64-bit A7 CPU, with a “desktop class architecture,” over 1 billion transistors and 2X performance, is messaging straight out of the Intel PR handbook. The bar has been raised, and Apple will now start the corporate spec-manship game against its rivals. Add in the security message of iOS and we have a transformation of the messages Wintel used to sell to the corporate world into the ones Apple will use.

Beyond the messaging, one has to think of the impact a 64-bit A7 will have across an entire platform of products. While the new CPU is at this stage overkill for the iPhone, it is certainly interesting to consider what it could bring to the iPad or an ARM-based Mac Air. On the roadmap must surely be a multitasking iOS that requires increased CPU horsepower and the ability to address more than 4GB of memory. Apple has entered x86 Silvermont territory.

The cannibalization of the 9.7” iPad by the iPad mini this past year shows not only that there are multiple tablet market segments but also that a sustainable high-end market requires more than just a larger screen. The consumer would rather surf the internet with a smaller, lighter tablet and skip the keyboard. Apple has been slow to upgrade the larger iPad into something meaningful, and perhaps that is because they were waiting on the A7 to power a mobile device that can truly offer “Office functionality.” Look for VMware and Parallels, as well as iWork, to be leveraged to offload Microsoft wherever possible. As for the A7, the 102mm² die size is quite interesting, as it is around the same point at which Intel in the 1990s would build its low-end CPUs targeting corporate. It is obviously an economic, wafer-yield inflection point.
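
A rough sketch of the die-per-wafer and yield arithmetic behind that observation; the defect density is a generic assumed value, not Samsung’s or Intel’s:

```python
# Gross die per wafer and a simple Poisson yield model for a ~102mm2 die.
import math

WAFER_DIAM_MM = 300.0
DIE_AREA_MM2 = 102.0  # the A7 die size cited above
D0_PER_CM2 = 0.2      # assumed defect density for a mature process

wafer_area = math.pi * (WAFER_DIAM_MM / 2) ** 2
# Classic gross-die estimate with an edge-loss correction term:
gross = (wafer_area / DIE_AREA_MM2
         - math.pi * WAFER_DIAM_MM / math.sqrt(2 * DIE_AREA_MM2))
yield_frac = math.exp(-D0_PER_CM2 * DIE_AREA_MM2 / 100)  # Poisson model
print(f"gross die: {gross:.0f}, yield: {yield_frac:.0%}, "
      f"good die: {gross * yield_frac:.0f}")
# ~627 gross die, ~82% yield, ~511 good die per 300mm wafer. Yield falls
# off exponentially with area, which is why ~100mm2 is a sweet spot.
```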

It is safe to say at this point that no one in the PC or mobile industry can accurately predict the true size of the PC and tablet markets, given the turnover of new products, the absorption of new technology and the inability of buyers to fully understand the productivity tradeoffs. Experimentation leads the way, which means Apple, Microsoft, Google and the rest of the players will be introducing even more products at ever-smaller price-point divergences in order to stay close to changes in trends.

History says that consolidation follows.

With its expensive iPhone 5C and 5S swimming in a sea of <$100 smartphones, Apple seems to be drawing a line in the sand in the consumer space while it builds its moat around corporate. The winner of corporate, though, has to win the $500-$999 mobile space, and that is up for grabs. The device has to be very light, outfitted with a 10-11” screen and keyboard, and last all day on a charge. Apple has it in the Mac Air, but that is $999 and above. The PC players are rushing in with Windows and Android tablets. What Apple introduces in the next couple of months will determine whether it sees a return to growth in this segment or whether the battle rages on for years.


Emerging Trend – Choose DRAM as per Your Design Need
by Pawan Fangaria on 09-11-2013 at 7:00 pm

Lately I have been studying new innovations in the memory world, such as ReRAM and the memristor. Since DRAM (although it has become a commodity) is used extensively in mobile, PC, tablet and other products, I was inclined to learn more about it as well. While reviewing Cadence’s offerings in memory subsystems, I came across this whitepaper, which provides a comprehensive description of some of the existing and upcoming (in production) DRAM interfaces along with their pros and cons.

Obviously, due to price pressure, the DRAM business is a volume game and there is not much scope for differentiation. However, amid increasing SoC size, architecture and complexity, and, more importantly, with the mobile market driving the DRAM business, it’s worth paying attention to the important demands for increased bandwidth, higher operating frequency and low power consumption. In view of these demands, different DRAM interface architectures are emerging. Let’s take a brief look at them here, but do see the whitepaper for the actual details and for references such as JEDEC and HMC.

[LPDDR3 Architecture]

LPDDR3 (Low-Power Double Data Rate 3) – This interface suits mobile devices well, as they require high memory density, high performance and low power consumption. It also has lower I/O capacitance, which helps in achieving increased bandwidth and operating frequency.

[LPDDR4 Architecture]

LPDDR4 – This is the latest standard from JEDEC, optimized for next-generation mobile devices. It will provide double the bandwidth of LPDDR3 at similar power and area. It has a smaller page size, multiple channels and a reduced command/address bus pin count. It will be in mass production in 2014.

[Wide I/O 2 Architecture]

Wide I/O 2 – This is again from JEDEC, expected to reach mass production in 2015. It supports 3D-IC packaging for PC and server applications and can be used for high-end mobile applications. It covers high-bandwidth 2.5D silicon interposer and 3D stacked-die packaging for memory devices. In this architecture, designers can use EDA tools to take advantage of redundancy at the logic level to minimize device failures. Cadence Encounter Digital Implementation allows designers to route multiple redistribution layers (RDL) into a microbump or to use combination bumps: if one bump fails, the remaining bumps can carry on normal operation.

In 2.5D stacking, cooling is not much of a problem. However, in 3D stacking, heat dissipation from the middle of the stack can become a problem and hence needs careful thermal planning.

[HMC Architecture]

HMC (Hybrid Memory Cube) – This is being developed by the Hybrid Memory Cube Consortium and supported by many semiconductor and technology companies. It combines high-speed logic process technology with a stack of TSV (through-silicon via) bonded memory die. This architecture allows more DRAM I/O pins and hence provides the highest bandwidth of all these architectures (as high as 400G). Compared to LPDDR3, a single HMC can provide 15X higher performance and consume 70% less energy per bit. However, the cost of this technology is also high.

[HBM Architecture]

HBM (High Bandwidth Memory) – This is an emerging standard for graphics, defined by JEDEC (JEDEC’s HBM task force is now part of the JC-42.3 sub-committee) and expected to be published by late 2013, with mass production expected in 2015. It stacks DRAM die using TSV technologies to support bandwidths from 128GB/s to 256GB/s.
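
A quick sketch of where these headline bandwidth figures come from: pins times per-pin data rate. The bus widths and rates below are representative figures, not taken from the whitepaper:

```python
# DRAM bandwidth arithmetic: bus width (bits) x per-pin rate (Gbps) / 8.
def bandwidth_gbytes(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

configs = {
    "LPDDR4, 2ch x 16b @ 3.2Gbps": (32, 3.2),
    "Wide I/O 2, 512b @ ~1.07Gbps": (512, 1.067),
    "HBM, 1024b @ 1Gbps": (1024, 1.0),
}
for name, (bits, rate) in configs.items():
    print(f"{name}: {bandwidth_gbytes(bits, rate):.0f} GB/s")
# ~13, ~68 and 128 GB/s respectively; HBM at 2Gbps per pin doubles to
# 256 GB/s, which is the 128-256 GB/s range quoted above.
```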

So which memory standard works best for your design? There is no definitive answer. It really depends on the application’s requirements for power, performance and area, and, not to forget, the price to pay. For example, LPDDR4 should be good enough for the budget mobile market, whereas high-resolution computer graphics may require HBM.

It is worthwhile to look at the Cadence whitepaper, which has a detailed analysis of these architectures and information about what Cadence provides in support of them, such as memory controller and PHY IP. It also lays out Cadence’s roadmap for supporting upcoming DRAM architectures. Cadence also provides memory-model verification IP to verify memory interfaces and ensure design correctness.