Intel 22nm SoC Process Exposed!
by Daniel Nenni on 12-27-2012 at 9:00 pm

The biggest surprise embedded in the Intel 22nm SoC disclosure is that they still do NOT use Double Patterning, which is a big fat hairy deal if you are serious about the SoC foundry business. The other NOT so surprising thing I noticed in reviewing the blogosphere response is that the industry term FinFET was dominant while the Intel-invented term Tri-Gate was rarely used.

The transistor pitch – essentially the distance between two transistors – in the 22nm tri-gate technology is 80nm, which is the smallest pitch that can be produced using single-pattern lithography, Bohr says. “The next generation, 14,” he said, “we’re going to have to convert to Double Patterning to get tighter pitches.”
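As a sanity check on that 80nm figure (my arithmetic, not anything Intel published), the Rayleigh criterion for a 193nm immersion scanner lands right about there:

```python
# Back-of-the-envelope Rayleigh resolution estimate. My assumptions: an ArF
# immersion scanner (193nm, NA = 1.35) and a practical single-exposure k1
# of about 0.28. Minimum half-pitch = k1 * lambda / NA.
wavelength_nm = 193
numerical_aperture = 1.35
k1 = 0.28  # near the theoretical single-exposure floor of 0.25

half_pitch = k1 * wavelength_nm / numerical_aperture
print(f"half-pitch ~{half_pitch:.0f}nm -> minimum pitch ~{2 * half_pitch:.0f}nm")
# half-pitch ~40nm -> minimum pitch ~80nm
```

Anything tighter than that pitch needs two exposures, hence Double Patterning at 14nm.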

Mark Bohr is the infamous Intel Senior Fellow who mistakenly predicted the doom of the fabless semiconductor ecosystem. Mark is a funny guy. I remember him putting up an incomplete 22nm defect density trend slide at this year’s Intel Developer Forum and saying, “Was it a mistake that I left the numbers out? Yes! Oh my goodness, how could I have done that? But, gee, time is up, so … ”

TSMC on the other hand presents their process defect density numbers every year at the TSMC Tech Symposium. Transparency equals trust in the foundry business, believe it.

Back to Double Patterning, I will defer to the experts at Mentor for a complete description. Please see the Double Patterning Exposed articles for technical detail. No registration is required, just click on over.

So the question is: Why does TSMC use the extra lithography steps of Double Patterning at 20nm while Intel does not at 22nm? The answer is Restrictive Design Rules, which essentially eliminate any variability in the orientation of shapes on critical layers. Intel is very comfortable with incredibly restrictive design rules since it is a microprocessor manufacturer and not a pure-play foundry. Intel can micromanage every aspect of design and manufacturing down to the electron. TSMC, on the other hand, needs to accommodate different design requirements and intellectual property from 615 customers.

In addition to more flexible metal routing, Double Patterning also enables a tighter metal pitch, which will put TSMC 16nm head-to-head with Intel 14nm even though, as I explained in 16nm FinFET Versus 20nm Planar, 16nm FF leverages 20nm process technology.

It will be interesting to see how Intel tackles the Double Patterning challenge without the support of the mighty fabless semiconductor ecosystem.

Which brings me to another trending topic: Is 20nm Planar a full node, half node, or everybody gonna skip node?

I can tell you as a matter of fact that the top semiconductor companies around the world will NOT skip 20nm. 20nm tape-outs are happening now, with production silicon late next year. 20nm will require more processing time from GDS to wafer, but it will NOT be cost prohibitive for high-volume customers. You are probably familiar with the 80/20 rule, where 80% of something or other is controlled by 20% of the people. In the semiconductor industry we call it the 90/10 rule: 90% of the silicon shipped comes from 10% of the companies, and you can bet that they will tape out at 20nm. Designing at 20nm planar will also make the transition to 16nm FinFET easier, and I can tell you that EVERYONE will be taping out at 16nm FinFET. That’s my story and I’m sticking to it.

My favorite Mark Bohr quote: “We don’t intend to be in the general-purpose foundry business, I don’t think the volumes ever will be huge for Intel.” Exactly! So what is Intel going to do with all that empty fab space?


Equipment Down 16% in 2012, Flat to Down in 2013
by Bill Jewell on 12-22-2012 at 8:30 pm

Shipments of semiconductor manufacturing equipment have been trending downward since June 2012, based on combined data from SEMI for North American and European manufacturers and from SEAJ for Japanese manufacturers. The market bounced back strongly in late 2009 and in 2010 after the 2008 downturn, returning to the $3 billion a month level. Bookings and billings fell in the latter half of 2011 and recovered to the $3 billion level in the first half of 2012. The latest downturn is more severe than the 2011 one, falling below the $2 billion a month level. However, the downturn may be bottoming out, with November 2012 three-month-average bookings up 1% from October.

Total semiconductor manufacturing equipment shipments in 2012 will be about $28 billion, down 16% from 2011, based on data through November. Recent forecasts for 2013 range from a 4.4% decline (VLSI Research) to flat (SEMI). However, the largest foundry company, TSMC, is bucking the trend. According to Digitimes, TSMC plans to increase capital spending in 2013 by 8% to $9 billion.

What does this mean for the semiconductor market in 2013? Since the demise of SICAS, no accurate industry capacity utilization data is available. We at Semiconductor Intelligence estimate utilization is currently in the low to mid 80% range, down from the 90% to 95% range for 2010 and 2011. Thus the semiconductor market has room to grow in the near term without significant capacity additions. Our forecast is for 9% semiconductor market growth in 2013. Semiconductor market growth will probably accelerate in 2014 to the 10% to 15% range, requiring increased capacity. Thus the semiconductor equipment market should return to healthy growth in 2014.


Winner, Winner, Chicken Dinner!
by SStalnaker on 12-21-2012 at 8:00 pm

I have no idea if chicken was actually on the menu, but on December 12, Calibre RealTime picked up its third industry award, this time the 2012 Elektra Award for Design Tools and Development Software from the European electronics industry. Calibre RealTime came out on top in a group full of prestigious finalists, including ByteSnap, Cadence, CadSoft, Synopsys, and Xilinx.


Calibre RealTime provides complete sign-off DRC/DFM feedback to the designer during custom layout creation and editing. Using an OpenAccess run-time model to enable integration with most third-party custom design environments, the Calibre RealTime platform provides custom/AMS designers with immediate access to qualified Calibre rule decks (including recommended rules, pattern matching, equation-based DRC, and double patterning) during design creation. Instead of waiting for the full sign-off DRC iteration at the end of the design cycle, the designer can check each design choice against sign-off DRC with the same fast response time as in-design checkers. In addition, the full range of Calibre verification support is provided, including proprietary “hinting” and error identification displays that provide precise correction suggestions. This interactive editing and verification of layouts ensures DRC-clean designs in the shortest time possible, reducing overall design cycle time and giving designers more time for design optimization, resulting in better quality and higher performance.
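To make the idea of an interactive DRC check concrete, here is a toy example (mine, not Calibre code, and not a real foundry rule) of the kind of geometric check such an engine evaluates on every edit, a minimum-spacing rule between metal shapes:

```python
# Toy illustration of a minimum-spacing DRC check (my sketch, not Calibre code).
# Rectangles are (x1, y1, x2, y2) in nanometers.

def spacing(a, b):
    """Edge-to-edge distance between two non-overlapping rectangles."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)  # horizontal gap (0 if x-ranges overlap)
    dy = max(b[1] - a[3], a[1] - b[3], 0)  # vertical gap (0 if y-ranges overlap)
    return (dx ** 2 + dy ** 2) ** 0.5

MIN_SPACING_NM = 64  # hypothetical metal1 spacing rule

m1_a = (0, 0, 100, 40)
m1_b = (150, 0, 250, 40)
if spacing(m1_a, m1_b) < MIN_SPACING_NM:
    print("M1 spacing violation: shapes closer than 64nm")  # 50nm gap -> flagged
else:
    print("clean")
```

The point of the in-design approach is that this style of check runs against the full sign-off deck as shapes are drawn, not weeks later in a batch run.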

Calibre RealTime was previously recognized by Electronic Design with a Best award recognizing “the best technology, products and standards” of the year, and by DesignCon with its DesignVision award, honoring “the most innovative design tools in the industry.”

For more information, visit the Calibre RealTime webpage.




Apply within: four embedded instrumentation approaches
by Don Dingee on 12-21-2012 at 9:00 am

Anyone who has been around technology consortia or standards bodies will tell you that the timeline from inception to mainstream adoption of a new embedded technology is about 5 years, give or take a couple dream cycles. You can always tell the early stage, where very different concepts try to latch on to the same, simple term.

Such is the case with embedded instrumentation. At least four different post-silicon approaches have grabbed the term and are applying it in very different ways. Here at SemiWiki, our team has begun to explore the first two, but I found a couple of others while Googling around.

The basic premise of all these approaches is the need to see inside a complex design comprised of multiple IP blocks. All borrow from the board-level design world and the concept of IEEE 1149.x, JTAG. By gathering instrumented data from a device, passing it out on a simple low pin count interface, and connecting those interfaces to an external analyzer, a designer can gain visibility into what is going on without intrusive test points and broadside logic analysis techniques. (JTAG is also handy for device programming, but let’s stick to debug and visibility in this discussion.)
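To illustrate that premise, here is a minimal sketch (illustrative only, not any particular standard’s implementation) of the core scan idea: internal state is captured into a shift register and clocked out serially over a couple of pins.

```python
# Minimal sketch of the JTAG scan idea: an on-chip instrument exposes its
# state through a shift register; the debugger clocks bits out serially on
# TDO over a handful of pins instead of probing internal nodes directly.

class ScanRegister:
    def __init__(self, width, value=0):
        self.width = width
        self.bits = [(value >> i) & 1 for i in range(width)]

    def shift(self, tdi):
        """One TCK cycle: shift in one bit on TDI, return the bit on TDO."""
        tdo = self.bits[0]
        self.bits = self.bits[1:] + [tdi]
        return tdo

# Capture a hypothetical 8-bit status register, then shift it out.
status = ScanRegister(width=8, value=0b1010_0011)
captured = [status.shift(0) for _ in range(8)]
print("bits on TDO (LSB first):", captured)  # [1, 1, 0, 0, 0, 1, 0, 1]
```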

A quick snapshot of these approaches:
IJTAG, IEEE P1687 – Internal JTAG targets the problem of IP as a “black box”, especially when considering the problem of 3D and importing a block wholesale. The standard tries to rein in proprietary test schemes, each using differing and incompatible methods, into a single, cohesive framework. By standardizing the test description for an IP block, using Instrument Connectivity Language (ICL) and Procedure Description Language (PDL), a P1687-compliant block can connect into the test structure. If IP providers start adopting P1687, IP blocks plug-and-play in test development packages such as ASSET InterTech ScanWorks and Mentor Graphics Tessent. (A conceptual sketch follows the links below.)

IJTAG, Testing Large SoCs, by Paul McLellan
Mentor and NXP Demonstrate IJTAG …, by Gene Forte
One pager on IEEE P1687, by Avago Technologies
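As promised above, a conceptual sketch of the P1687 idea, modeled in Python rather than in ICL/PDL (my simplification, not the standard’s formalism): Segment Insertion Bits (SIBs) splice an instrument’s register into the active scan chain only when opened, keeping unused instruments out of the path.

```python
# Conceptual model of a P1687-style reconfigurable scan network (my sketch).
# A SIB contributes one control bit to the chain; opening it also splices in
# the instrument register sitting behind it.

class SIB:
    def __init__(self, name, instrument_bits):
        self.name = name
        self.open = False
        self.instrument_bits = instrument_bits  # register width behind this SIB

def active_chain_length(sibs):
    """Total bits in the current scan path: one control bit per SIB,
    plus each opened instrument's register."""
    return sum(1 + (s.instrument_bits if s.open else 0) for s in sibs)

# Three hypothetical instruments hanging off one chain.
network = [SIB("temp_sensor", 12), SIB("bist_ctrl", 32), SIB("pll_status", 8)]
print(active_chain_length(network))   # 3 -> only the SIB control bits
network[1].open = True                # open the BIST controller segment
print(active_chain_length(network))   # 35 -> SIB bits + 32-bit instrument
```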

Tektronix Clarus – Tek has come at the same problem from a different direction, working on the premise that RTL is RTL, with an approach that looks at an RTL design and inserts analysis instrumentation seamlessly during any EDA flow. Tektronix Clarus is built around the idea of inserting a lot of observation points – perhaps as many as 100,000 – without taking a lot of real estate. Instrumentation is always collecting data, but the analyzer capability uses compression and conditional capture techniques to bring in long traces on just the signals of interest. This approach is more about improved analysis and deeper visibility under functional conditions. (A toy sketch of conditional capture follows the link below.)

Tektronix articles from SemiWiki
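Here is the toy sketch of conditional capture mentioned above (my illustration, not Tektronix code): samples are stored into a small ring buffer only while a trigger condition holds, so a long observation window does not require deep trace memory.

```python
# Illustrative sketch of conditional capture: sample continuously, but only
# store into a small ring buffer when a trigger condition matches.
from collections import deque

TRACE_DEPTH = 8                      # hypothetical on-chip buffer depth
trace = deque(maxlen=TRACE_DEPTH)    # ring buffer: oldest samples fall out

def on_clock(cycle, bus_value, trigger):
    if trigger(bus_value):           # capture condition, e.g. an error range
        trace.append((cycle, bus_value))

# Watch a bus for "interesting" values across a long run.
for cycle in range(100_000):
    value = (cycle * 7919) % 256     # stand-in for observed bus activity
    on_clock(cycle, value, trigger=lambda v: v >= 250)

print(list(trace))  # only the last TRACE_DEPTH matching samples survive
```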

Nexus, IEEE-ISTO 5001 – Recognizing that processors, especially multicore parts, are getting more complex and software for them is getting harder to debug, Nexus 5001 creates more robust debugging capability than IEEE 1149.7 JTAG alone offers. It defines a second Nexus auxiliary port with five signals and adds higher-bandwidth capability for operations like reading and writing memory in real time. There is also a port controller which combines signaling for up to 8 cores. The latest revision of the standard, in 2012, adds support for the Xilinx Aurora SerDes, a fast pipe for large traces. (This is one of those standards taking a long time to get traction – membership in Nexus 5001 has waned a bit, and it still seems focused on automotive MCUs with staunch backing from GM.)

Nexus 5001 forum
IPextreme Freescale IP

GOEPEL ChipVORX – GOEPEL Electronic has looked at the problem of different IP using different interfaces and tried to create a proprietary solution using adaptive models that can connect software tools to various target instruments. There’s not a lot of detail available, but the GOEPEL Software Reconfigurable Instruments page has a bit more info.

Different technologies emerge to fit different needs, and these approaches show how deep the need for post-silicon visibility and multicore and complex IP debugging goes. Which of these, or other, embedded instrumentation approaches are you taking in your SoC or FPGA designs?


Formal Verification at ARM
by Paul McLellan on 12-20-2012 at 4:34 pm

There are two primary microprocessor companies in the world these days: Intel and ARM. Of course there are many others but Intel is dominant on the PC desktop (including Macs) and ARM is dominant in mobile (including tablets).

One of the keynotes at last month’s Jasper User Group (JUG, not the greatest of acronyms) was by Bob Bentley of Intel, talking about how they gradually introduced formal approaches after the highly visible and highly embarrassing floating point bug of 1994. I already blogged about that here. Bob took time to emphasize that Intel doesn’t endorse vendors, so nothing he said should be taken as an endorsement of anyone. He did say that besides using commercial tools they also have some of their own internal formal tools.

Later in the day, ARM talked about their use of formal verification and, in particular, Jasper. Laurent Arditi of the ARM Processor Division in Sophia Antipolis (where I lived for 6 years; highly recommended as a great mix of high tech, great lifestyle and great weather) presented. In some ways this was an update, since ARM presented on how they were gradually using more and more formal at last year’s JUG.

He characterized ARM’s use of Jasper as AAHAA. This stands for:

  • Architecture
  • bug Avoidance
  • bug Hunting
  • bug Absence
  • bug Analysis

Architecture:
Specify architectures using formal methods and verify them for completeness and correctness. For example, ARM used this approach for verifying their latest memory bus protocols. They talked about this at last year’s JUG.

Bug Avoidance:
Catch bugs early, during design bringup, usually before the simulation testbench is ready. By catching bugs early they are easier and cheaper to fix.

Bug Hunting:
Find bugs at the block and system level. Server farm friendly (lots of runs needed).

Bug Absence:
Prove critical properties of the design. This is complex work that requires considerable user expertise.
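Jasper’s engines are proprietary, but to give a flavor of what “proving bug absence” means, here is a tiny inductive proof using the open-source Z3 solver in Python (my example, not ARM’s flow): a saturating counter provably never exceeds its limit.

```python
# Tiny "bug absence" proof sketch with Z3 (illustrative, not Jasper's engine):
# show the invariant c <= LIMIT is inductive for a saturating 4-bit counter.
from z3 import BitVec, BitVecVal, If, ULT, ULE, Solver, Not, unsat

LIMIT = BitVecVal(10, 4)
c, c_next = BitVec("c", 4), BitVec("c_next", 4)

def step(cur):
    # Next-state function: increment, saturating at LIMIT.
    return If(ULT(cur, LIMIT), cur + 1, LIMIT)

s = Solver()
# Inductive step: assume the invariant holds now, and ask the solver to find
# a successor state that violates it.
s.add(ULE(c, LIMIT), c_next == step(c), Not(ULE(c_next, LIMIT)))
print("invariant inductive:", s.check() == unsat)  # True -> no violation exists
```

If the solver reports the negated property unsatisfiable, no reachable state can violate it; that is a proof, not a pile of passing tests. (A full proof also needs the base case, that the reset state satisfies the invariant.)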

Bug Analysis:
Investigate late-cycle bugs. Isolate corner-case bugs observed in the field or in the lab. Confirm the correctness of any fixes.

Jasper formal is now going mainstream at ARM, which is a big advance from last year, when it was starting to get traction in just a few groups. Bug Avoidance and Bug Hunting flows are leading the proliferation and give an early return on investment. They now have formal regression flows running on their server farm. Formal is no longer a niche topic: an internal ARM conference on verification drew over 100 interested engineers. Just like at Intel, much of the introduction of formal requires cookbooks and flows to be put together by real formal experts and then proliferated to the various design teams.


For example, the above graphic shows two IP blocks. One was done with classical techniques (aka simulation). The other, an especially hard block to verify, was done using formal techniques with no bringup testbenches. As you can see, most of the bugs were found much earlier.


Currently Jasper is heavily used for various aspects of AAHAA. You can see in the above diagram how Jasper features map onto the AAHAA taxonomy.

In the future, apart from propagating additional formal techniques, ARM wants to use formal approaches for coverage analysis, IP-XACT validation, security validation (TrustZone), and UPF/CPF (power policy) checking.


IP Scoring Using TSMC DFM Kits
by Daniel Payne on 12-20-2012 at 11:00 am

Design For Manufacturing (DFM) is the art and science of making an IC design yield better in order to receive a higher ROI. Ian Smith, an AE from Mentor in the Calibre group, presented a pertinent webinar, IP Scoring Using TSMC DFM Kits. I’ll provide an overview of what I learned at this webinar.


Intel’s New Tablet Strategy Brings Ivy Bridge to the Forefront
by Ed McKernan on 12-19-2012 at 11:00 pm

In an article published this week in Microprocessor Report and highlighted in Barron’s, Linley Gwennap makes the argument that Intel should stay the course and fix the PC instead of trying to offset its declines with sales into the smartphone and tablet space. He cites a dramatic slowdown in processor performance gains (from 60%/year to 10-16%/year) as the cause of lower PC sales growth, and notes that the mobile market outside of Samsung and Apple is only $1.5B in size. I think his analysis of Intel’s focus is correct, yet restrictive. Intel has on several recent occasions subtly communicated a Firewall strategy that is firm when it comes to tablets while giving way in the smartphone space, especially if it leads to foundry business (i.e. Apple).

What is a tablet? The first iterations defined by Apple are soon to be challenged as the Ultrabook convertibles come down in size and price to compete with the iPad, and likewise the battery life, touchscreen and baseband functions migrate upward. What began as a low-performance consumer internet device will in 2013 expand into a broad range of compute devices to satisfy the needs of consumers as well as corporations, at price points from below $300 to above $1200 and in 7” to 13” LCD-based form factors. This is the battleground, and Intel needs x86 to succeed. Its main argument for x86 will be with corporate customers, based on its traditional strength, which is performance. Not of the Atom variety, but of the combination of a 14nm Ivy Bridge and Haswell.

Apple’s introduction of the A6X-upgraded iPads in October was a foretaste of what is to come. The tablet market is really the successor to the notebook, relying on a better balance of battery life and performance, aided by SSDs and absent the high Intel processor price. Cannibalization has occurred rapidly in the consumer channel, especially following the introduction of the iPad mini. Next up is the corporate market, where performance longevity drives the CFO’s and CTO’s calculation of 3-5 year ROIs. This is why the A6X processor is key. The much-improved performance of the A6X, based on a full custom design, encroaches on Intel’s Ivy Bridge turf and thereby enables the iPad to benchmark well against Ultrabooks and convertibles, effectively leapfrogging competitive ARM solutions and Intel’s Atom.

Publicly, Intel continues to communicate that the Atom processor, which is behind in terms of schedule and process technology, will catch up by 2014 on 14nm. However, the market isn’t waiting and in fact is evolving faster than originally anticipated. As an example, I offer a little anecdote from a meeting I had with a PC customer a few weeks ago. I asked whether there were plans to build a tablet with a fan, and the engineer remarked “for sure.” This should tell you that the tablet market in 2013 will be an all-out battle where performance is pushed to its limits. It doesn’t take much to know that Intel will be driving this trend with the help of its customers. This is why I believe Intel’s roadmap will change dramatically in the months ahead as they cram peak performance into smaller enclosures while recreating the extensive cooling systems they built for notebooks more than a decade ago.

Currently Haswell is expected to enter the market in Q2 2013 at 22nm. It is designed to win the high-end PC market, including Ultrabooks. Its larger die size, though, will prevent it from targeting the volume segment of the mobile markets until a 14nm shrink arrives with Broadwell in Q2 2014. The interim period from now until Q2 2014 presents a hole that Intel must fill. This is where I expect Intel to make a dramatic move with Ivy Bridge. First up will be a reduced-frequency (~1GHz), reduced-voltage (sub-1V) part to drop the thermal power below 10W and satisfy the cooling demands of even the smallest tablets. Following this, though, I expect Intel to deliver a 14nm shrink of Ivy Bridge to reduce power further and to build an economical part with die sizes down as small as 50mm². With this part Intel will be able to cover the four corners of lowest cost, highest performance, best performance per watt and longest battery life.

Given the high yield of Ivy Bridge at 22nm, and that it is the high-volume runner, it would be a perfect candidate to ramp as soon as 14nm is ready. The combination of Haswell and a 14nm Ivy Bridge in late 2013 can really be seen as the two pillars that Intel will rely on to slow the movement away from the $35B legacy PC platform. As mentioned in a previous blog, if the datacenter business ramps from $10B in 2011 to $20B in 2016, then Intel can sustain a PC decline of roughly 1/3 and maintain a similar cash flow. This assumes that the fabs are adequately loaded.

The aggressive mobile strategy outlined here leaves open the question of Atom’s long-term viability. I believe the value of Atom to Intel has more to do with training engineers on low-power circuit design, efficiencies in developing custom SoCs, and building out IP. Tablets and Ultrabooks are vastly different from smartphones in the one area that counts most: increased space, which offers vastly more degrees of freedom in which to fit the processor, memory and wireless components.

The smartphone is quickly becoming a two-horse race, where volumes will eventually reach multiple billions of units. Apple and Samsung are both continuing to vertically integrate across almost every component. Samsung is on a path to be completely internal, while Apple is cobbling together a supply chain that includes the “over the hill gang” of formerly great Japanese suppliers, about to become cost leaders with the rapidly depreciating yen, and leading-edge semiconductor giants like TSMC and most likely Intel. Having both will allow Apple to have the best economics and leverage.

Intel’s retreat from the smartphone market with the exit of Atom will be necessary for Apple to sign on. Despite rumors to the contrary, Intel will be happy to build Apple’s ARM processors if it comes at the expense of Samsung or TSMC. In 12 months Intel’s expanded six-fab footprint will be complete and available to support any combination of 22nm and 14nm processes (85% of the equipment in the fab works for both process nodes). Today it takes roughly three fabs to meet the demands of the x86 market. Intel’s CapEx budget will drop dramatically in the coming year, enabling Intel to use its tremendous cash flow to “buy” foundry customers. Likewise, if the fabs remain empty, the cash flow will diminish rapidly.


Full Disclosure: I am Long AAPL, INTC, QCOM, ALTR


Cortex-A9 speed limits and PPA optimization
by Don Dingee on 12-19-2012 at 3:01 pm

We know by now that clock speeds aren’t everything when it comes to measuring the goodness of a processor. Performance has direct ties to pipeline and interconnect details, power factors into considerations of usability, and the unspoken terms of yield drive cost.

My curiosity kicked in when I looked at the recent press release from Cadence announcing they had reached 2.2 GHz on a 28nm dual-core ARM Cortex-A9 with Open Silicon. Are we reaching the limits of the Cortex-A9 in terms of clock speed growth? Or are more improvements in power, performance, and area (PPA) in store for the core?

The raw percentages quoted by Cadence in that release sound great: 10% reduction in design area, 33% reduction in clock tree power, 27% reduction in leakage power compared to an unnamed prior design flow. These new figures were achieved with a combination of the latest RTL compiler, RTL-to-GDSII core optimization, and clock concurrent optimization techniques, which are really targeted at 20nm design but are certainly applicable to less aggressive nodes.

We may be pressing the limits on what the Cortex-A9 core can do at 28nm, and there is likely only one more major speed bump to 20nm in store for the Cortex-A9. I went hunting and found several data points.

ST-Ericsson has (had?) a 2.3 GHz version of the dual-core NovaThor L8580 running on an FD-SOI process, with rumbles of 2.5 GHz possible. It’s questionable whether this device, or the rest of the forward ST-Ericsson roadmap, ever gets to market in light of STMicro wanting to pull out of the JV, the continuing saga of Nokia’s attempts to recover, and the stark reality of US carriers preferring Qualcomm 4G LTE implementations.

TSMC has taped out a 3.1 GHz dual-core Cortex-A9 on their 28HPM process, which, from what I can find, is the unofficial record for Cortex-A9 clock speed. However, the “typical” conditions which TSMC cites leave out one detail: active cooling is required, which rules out use of a real-world part at this speed in phones or tablets. The economics of yield at that speed are unclear, but they can’t be good, otherwise we’d be hearing a lot more about this on processor roadmaps.

Along the lines of how much PPA optimization is possible, I went looking for another opinion and found this SoC Realization white paper from Atrenta, which discusses how PPA fits into the picture. The numbers Cadence is quoting suggest that we’re close to closing the optimization gap for the Cortex-A9, because the big-hitters in the flow have been optimized.

By back-of-the-envelope calculation, if state-of-the-art optimization for a Cortex-A9 gives us 2.2 GHz at 28nm, a process bump to 20nm creates headroom to about 3 GHz. Reports have Apple heading to TSMC for 20nm quad-core designs, but reading between the lines, the same concerns of power consumption and cooling exist; these chips aren’t slated for iPhones. (As I’ve said before, Apple is driving multiple roadmap lines: one on the A6 for phones, one on the A6X for tablets and presumably the long-awaited Apple TV thingie, and likely a third ARM-based chip for future MacBooks, probably on the 64-bit Cortex-A50 series core.)
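For the curious, the arithmetic behind that headroom claim is nothing deeper than linear scaling (my crude assumption; real-world gains depend heavily on the process and the design):

```python
# Naive estimate: assume achievable clock scales with the node shrink ratio.
f_28nm_ghz = 2.2        # the Cadence/Open Silicon result at 28nm
shrink = 28 / 20        # 1.4x linear shrink
print(f"~{f_28nm_ghz * shrink:.1f} GHz headroom at 20nm")  # ~3.1 GHz
```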

The reason I say the Cortex-A9 likely gets only one more speed bump is explained pretty well in this article, projecting what 64-bit does for ARM-based core performance. While a lot of that is estimation, the point I agree with is that most of the energy for further EDA optimization will be put into the Cortex-A50 series. TSMC and ARM both agree that the drive for 16nm FinFET and beyond is focused on 64-bit cores.

A couple of immutable rules of my own when it comes to tech:

  • 10 engineers can make anything work, once; optimization is more interesting.
  • Once something is optimized, it’s optimized, and it’s time to design the next thing.

I think we’re reaching that point on the Cortex-A9, and 3 GHz is about the end of the line for what PPA optimization and process bumps will do. With that said, what may happen is instead of going for higher clock speeds, designers drive the Cortex-A9 for lower power and take it to more embedded applications.

Punditry has its risks, like being wrong a lot or being labeled Captain Obvious. I’m thick skinned. What are your thoughts on this topic, agree or disagree?


Intel not interested by NVELO? Samsung was…
by Eric Esteve on 12-19-2012 at 3:06 am

The short news came during last weekend, and LinkedIn was the most efficient medium for learning that NVELO had been acquired. Probably very few people outside the SSD ecosystem knew about NVELO. Based in Santa Clara, the company was a spin-off from Denali, privately owned, and if you look at the top management you will recognize a few names, like David Lin, the VP of Product Development, or Sanjay Srivastava, Chairman of the Board, both part of the winning team that sold Denali to Cadence for $315M. The product developed by NVELO? Dataplex, SSD caching software that lets you benefit from the advantages of NAND flash over pure Hard Disk Drive (HDD) storage, but without the extra cost of a complete SSD storage system. Pretty smart product…

There are several products in the market that use NVELO’s Dataplex software such as OCZ’s Synapse, Corsair’s Accelerator and Crucial’s Adrenaline SSDs. Dataplex is essentially an alternative to Intel’s Smart Response Technology (SRT) but with fewer limitations. For example, Dataplex is not tied to any specific chipsets, making it a viable option for AMD based setups and older systems without Intel’s SRT support. There is also no 64GB cache size limitation like in Intel’s SRT, although most of the SSDs that are bundled with Dataplex are 64GB or smaller. Whether it’s worth it to use an SSD bigger than 64GB for caching is a different question, but at least there is an option for that.
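Conceptually, what caching software like Dataplex does is keep hot blocks on the small SSD and fall back to the HDD on a miss. A toy sketch of that policy (my illustration; NVELO’s actual placement algorithm is not public):

```python
# Toy block-cache sketch (my illustration, not NVELO's algorithm): hot HDD
# blocks are promoted to a small SSD cache with LRU eviction.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.ssd = OrderedDict()  # block number -> data, kept in LRU order

    def read(self, block, read_from_hdd):
        if block in self.ssd:                 # cache hit: fast SSD path
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = read_from_hdd(block)           # miss: slow HDD path
        self.ssd[block] = data                # promote the hot block
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)      # evict least recently used
        return data

cache = BlockCache(capacity_blocks=2)
for blk in [7, 8, 7, 9, 7]:
    cache.read(blk, read_from_hdd=lambda b: f"data{b}")
print(list(cache.ssd))  # [9, 7] -> block 8 was evicted
```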

I don’t know if Intel was among the companies bidding for NVELO, but the fact is that Samsung completed this acquisition on December 14, 2012, for an undisclosed amount (probably several tens of millions of dollars; I would say maybe $50 million, but I have absolutely no inside information). According to Samsung: “The timely integration of NVELO’s flagship storage technology into Samsung’s best-in-class SSD technologies will give Samsung customers access to an ever-evolving and more diversified portfolio of NAND storage solutions suitable for a broad range of computing platforms.”

The semiconductor industry is fast moving, and that’s no scoop, but Intel is certainly at the beginning of a huge reorganization that it has to complete if it doesn’t want to lose its #1 position, and maybe more than that. Missing the move from PC to mobile in general (smartphone or media tablet) could be dramatic if the company doesn’t find a new sustainable source of revenue, on the order of $20 billion, in the very near future, let’s say by 2015 or so.

The NVELO acquisition will certainly not bring that kind of revenue to Samsung, but it is one of the signs that the semiconductor industry is changing heavily, and that Samsung is now acting more like a leader than the challenger it still is relative to Intel. But for how long?

As a bonus, you may want to read the declarations made by Samsung and NVELO representatives: they are excited, and we are too!

“The acquisition of NVELO will enable us to extend our ability to provide SSD related storage solutions to customers. We are pleased with this transaction as the employees of NVELO share our vision to take SSD storage into the next-generation of performance and reliability,” said Young-Hyun Jun, executive vice president of Flash product & technology, Device Solutions, Samsung Electronics.

“The NVELO team is excited to join the Samsung family,” said Jiurong Cheng, president and CEO, NVELO. “We look forward to accelerating storage innovation in close cooperation with Samsung storage experts as we help to deliver fully integrated SSD solutions to the market.”

The acquisition involves all technology and personnel under NVELO, Inc. Further details of the agreement were not disclosed.

Eric Esteve from IPNEST


A Brief History of Berkeley Design Automation
by Daniel Nenni on 12-18-2012 at 1:00 pm

Analog, mixed-signal, RF, and custom digital circuitry implemented in GHz nanometer CMOS introduces a new class of design and verification challenges that traditional transistor-level simulators cannot adequately address. Berkeley Design Automation, Inc. (BDA) was founded in 2003 by Amit Mehrotra and Amit Narayan, UC Berkeley Ph.D. graduates, with the intent of delivering next-generation circuit-level design tools to address these coming challenges. Ravi Subramanian, another UC Berkeley Ph.D., joined shortly thereafter as CEO. Together, and with initial funding from Woodside Fund and Bessemer Venture Partners, the three embarked on launching the venture.

BDA’s initial focus was to deliver full-spectrum device noise analysis for high-performance analog and RF ICs based on Mehrotra’s fundamental research contributions. In 2004, the company introduced its initial product, PLL Noise Analyzer™, the first tool in the industry to provide closed-loop transistor-level device noise analysis for phase-locked loops (PLLs). Through establishing the tool in several leading-edge semiconductor companies, BDA positioned itself to learn about design teams’ most significant circuit design and verification challenges—all of which required breakthrough accuracy, performance, and capacity. BDA was able to leverage the technologies it developed for the PLL Noise Analyzer to create a new nanometer circuit verification platform that would deliver just that.

In 2006, BDA introduced the centerpiece of its new platform—the Analog FastSPICE™ circuit simulator (AFS). The product’s simple yet powerful value proposition was anchored on delivering results identical to traditional SPICE with 5×–10× higher performance and 5×–10× higher capacity, in a tool that was plug-compatible with existing flows. With this breakthrough capability, design teams could solve analog/RF circuit verification problems that were previously impossible or infeasible using traditional SPICE. Thus the new simulator category “Analog FastSPICE” was born: “Analog” accuracy with “FastSPICE” performance.

In order to establish itself in a market crowded with legacy tools, BDA developed its now well-known engagement model of solving problems that design teams literally could not address with their current tools. Generally this consisted of simulating circuits one or two levels of hierarchy above what teams could currently simulate, running circuit simulations in days that would otherwise take weeks, and simulating circuits that otherwise had to run in digital fastSPICE simulators – all with foundry-certified SPICE accuracy and a drop-in use model. Word spread that SPICE wasn’t dead.

Design teams rapidly adopted AFS and encouraged BDA to deliver even more performance and capacity without compromising accuracy. They also asked BDA to extend AFS functionality to address new nanometer CMOS verification needs, including device noise impact on all noise-sensitive circuits, large post-layout simulations, complex mixed-signal simulation, and efficient characterization of global and local process variation. Leveraging its modular architecture implemented as a unified executable, BDA rapidly filled out the AFS Platform, including: multithreaded and multi-core parallel execution, >10M-element DC and transient capacity, >100K-element periodic steady-state analysis, full-spectrum periodic noise analysis, full-spectrum transient noise analysis, and HDL co-simulation. All of these capabilities provide foundry-certified accuracy to 20nm for both leading netlist and model formats.

The company’s technology development pipeline continues unabated, with leading-edge development to break down remaining and emerging barriers. Most recently BDA announced breakthrough AMS capabilities in its Analog FastSPICE AMS simulator, which uniquely works with any leading HDL simulator and makes mixed-signal simulation practical for everyday use.

Today, over 125 semiconductor companies around the world rely on the BDA AFS Platform and the company’s deep application expertise to meet their nanometer circuit design and verification challenges. The main circuit application areas include: high-speed I/O, PLLs/DLLs, ADCs/DACs, image sensors, transceivers, memories, and RFICs.

BDA was recognized as one of the 500 fastest growing technology companies by revenue in North America in both 2011 and 2012 by Deloitte. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital.