
Testing ARM Cores – Mentor and ARM Lunch Seminar

by Beth Martin on 07-08-2012 at 8:29 pm

If you are involved in testing the memory or logic of ARM-based designs, you’ll want to attend this free seminar on July 17, 2012 in Santa Clara. Mentor Graphics and ARM have a long-standing partnership and have optimized the Mentor test products (a.k.a. Tessent) for ARM processors and memory IP.

The lunch seminar runs from 10:30 to 1:00 at the Santa Clara Marriott. The presenters are Richard Slobodnik of ARM and Stephen Pateras of Mentor Graphics. They will describe the specific test solutions developed to cover memory and logic test for ARM-based designs. A newer feature is the shared bus interface, where MemoryBIST controllers reside outside the ARM core and use the shared bus to test the memories inside the core. Blocks with a shared bus and with memories on the bus (memory clusters) have a functional interface to the bus (see the figure).
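The march algorithms that MemoryBIST controllers typically execute are standard fare in memory test. As a rough software illustration (the memory model and the stuck-at fault below are invented for the sketch; real controllers implement this as hardware state machines), a March C- style test looks like:

```python
# Illustrative sketch of a MemoryBIST-style march test (March C-),
# run against a simple Python model of a RAM.

def march_c_minus(mem, n):
    """Run March C- over addresses 0..n-1; return sorted list of failing addresses."""
    fails = set()

    def w(addr, v): mem[addr] = v
    def r(addr, expect):
        if mem.get(addr) != expect:
            fails.add(addr)

    up, down = range(n), range(n - 1, -1, -1)
    for a in up:   w(a, 0)            # ascending:  write 0
    for a in up:   r(a, 0); w(a, 1)   # ascending:  read 0, write 1
    for a in up:   r(a, 1); w(a, 0)   # ascending:  read 1, write 0
    for a in down: r(a, 0); w(a, 1)   # descending: read 0, write 1
    for a in down: r(a, 1); w(a, 0)   # descending: read 1, write 0
    for a in down: r(a, 0)            # descending: read 0
    return sorted(fails)

# A healthy memory passes; a memory with a stuck-at-0 bit at address 3 fails there.
class StuckAt0(dict):
    def __setitem__(self, a, v):
        super().__setitem__(a, 0 if a == 3 else v)

print(march_c_minus({}, 8))          # → []
print(march_c_minus(StuckAt0(), 8))  # → [3]
```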

Sign up for this free ARM / Mentor Graphics Lunch Seminar now.

If you want to study up before, here are two relevant whitepapers from Mentor:
Memory Test and Repair Solution for ARM Processor Cores
High Quality Test of ARM® Cortex™-A15 Processor Using Tessent® TestKompress®


NVM IP: Novocell Semiconductor has announced an expansion of their product line

by Eric Esteve on 07-08-2012 at 3:52 am

“Novocell Semiconductor’s core antifuse-based OTP Smartbit™ technology was first patented in 2001 and 2002, and created a solid foundation for the first ten years,” stated Walt Novosel, President and CTO. “Since then, our customer-driven focus has led to numerous innovations in our original high-reliability Smartbit-based NVM IP to best serve specific system-on-chip (SoC) market segments. Our announcement today unveils our full line of NVM products to fully serve our customers’ needs, from 8-bit register OTP, to specialty trimming and calibration OTP, to 4Mbit ultra-high-density code storage and configuration OTP, to 1000x multi-time-write hybrid OTP/MTP.”
Continue reading “NVM IP: Novocell Semiconductor has announced an expansion of their product line”


Intel Goes Vertical to Guarantee PC Growth

by Ed McKernan on 07-07-2012 at 8:30 pm

A Bloomberg article from early July caught my eye as it portends further changes in the competitive mobile market landscape. Intel is now in the business of paying Taiwanese panel suppliers to ensure the supply of touch-screen panels for PC ultrabooks. In essence it says that to win in the PC market, Intel has to mimic Apple and go more and more vertical in the supply chain. Apple’s stellar growth makes it difficult for PC manufacturers to forecast true demand out 3 to 6 months, and given their minuscule profit margins, they have to veer toward the conservative or face the risk of going out of business with excess inventory. Intel, like Microsoft, is faced with having to control its destiny vs. the laissez-faire Wintel model that has existed for 30 years.

In a previous blog, I mentioned how Microsoft may have started a Thermonuclear War with its customers (e.g. HP and Dell) when it introduced its Windows 8 Tablet – or should we say pre-announcement of Win 8 Tablets. Microsoft and Intel are showing signs that the combined profits that they derive from the PC market are too high for their customers to price at a suitable discount against the growing Apple Empire. Apple buries its O/S cost and in the case of iPADs and iPhones its CPU cost. These costs are lower than what Microsoft and Intel charge their PC OEMs and neither one wants to give in as Tablets and Ultrabooks rollout this fall. Given the strength of the iPAD growth and the now almost assured rollout of a smaller iPAD in September at $299, OEMs are concerned about what the true demand is for PCs, especially in the US and Europe.

Furthermore, Intel is in a short-term mode of keeping Ivy Bridge ULV prices high in order to force OEMs to abandon the idea of including an nVidia or AMD graphics chip in Ultrabooks, because the additional cost pushes system prices out of range of what the market will pay. I expect Intel, however, to drive prices lower to capture the market in Q4 before AMD responds with a competitive solution. Currently the lowest-cost Intel ULV part is over $100, which is way too high if Ultrabooks are to reach the $499 price point for high-volume consumers. Over the long term, though, Microsoft and Intel face unique challenges due to Apple’s growth. Both rely heavily on corporate and government purchases of PCs. Microsoft is threatened with the immediate prospect that Apple will make inroads with MacBook notebook PCs and iPADs. It is a direct hit on Microsoft’s O/S and Office revenue stream. Microsoft has to have an immediate answer this Fall with a Windows 8 tablet, but it appears that HP and Dell cannot deliver at a price that is below Apple’s iPAD.

Microsoft needs to step in and plug the hole with what will effectively be a discount on its software stack: essentially, give away the hardware to sell the software (i.e. the razor/razor-blade model we are all accustomed to). Intel has a different scenario playing out and appears to be in a stronger position. In the short run it is executing to a plan that calls for cannibalizing AMD and nVidia ($10B+ revenue) with the ultrabook platform, even while PC growth slows at the expense of iPADs. The investment in Taiwan panel manufacturers will likely come with an exclusivity that bars AMD and nVidia silicon from showing up in the end product. From mid 2013 onward, Intel has to win Apple’s business as it attempts to force the whole mobile market to the leading edge process node.

Qualcomm’s misread of demand for its 28nm 4G solutions is a significant sign that the industry based its smartphone and tablet business models on an (n-1) process technology instead of being out over the ski tips. By (n-1) process, I am speaking of how many semiconductor suppliers were counting on 40nm being the volume process for mobiles this summer and fall, and 28nm being a 2013 volume driver. Longer term, when Intel gets its baseband capabilities closer to Qualcomm’s, the leading edge will be determined by Intel’s latest process. Intel’s PC business model from the 1990s through today has been all about delivering processors on the leading edge. The trek they are taking to 14nm with mobile processors and Atoms with a robust communications platform speaks to the opportunity to cannibalize Qualcomm and Broadcom. However, en route to this scenario it is looking like they will need to take a greater role in propping up the PC system supply chain.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM, ALTR


Intel’s finfets too complex and difficult?

by Tom Dillinger on 07-07-2012 at 7:00 pm

Thanks to SemiWiki readers for the feedback and comments on the previous “Introduction to FinFET Technology” posts – very much appreciated! The next installment on FinFET modeling will be uploaded soon.

In the interim, Dan forwarded the following link to me: “Intel’s FinFETs too complicated and difficult, says Asenov,” which provides some (preliminary) analysis of FinFET behavior, based on recently published TEM pictures of Intel’s Ivy Bridge designs:
Continue reading “Intel’s finfets too complex and difficult?”


TSMC: Production Proven Design Services Driving SoC Innovation!

by Daniel Nenni on 07-06-2012 at 8:30 pm

One of the truisms of today’s disaggregated semiconductor design and manufacturing model is counter-intuitive to the do-it-yourself focus that is at the heart of every engineer. And yet, time and time again, success rewards those who understand that with today’s ever increasing complexity, it is difficult, if not impossible, to be all semiconductor things to all people.

This lesson was reinforced during a presentation at DAC 2012 from Global Unichip that focused on the services bundled into their Flexible ASIC Model[SUP]TM[/SUP], which allows semiconductor designers to focus on their core competency. What impresses me about their approach is that GUC doesn’t insist on a hard-and-fast hand-off point, but rather provides the flexibility for each company to determine where its core competency begins and ends.

GUC’s business today is being driven by high-gate-count, advanced-technology, low-power SoC designs. Low power is definitely key to much of today’s innovation. GUC’s low-power design services start with defining a specific low-power library and power-gating techniques, and also encompass DVFS and AVFS services along with low-power verification. But the heart of their design services lies in their low-power competency, their domain IP integration, and their highly sophisticated design for test (DFT).

The GUC low-power competency goes under the brand name PowerMagic® and covers IP providing both internal and external power shutdown, proven on more than 80 designs over the past five years. Dynamic power is a big concern for low-power, high-performance designs. Working through different methodologies, GUC has mastered the ability to efficiently perform clock gating at the architecture level. They are also experts at designing for multiple supply voltages on a single chip and at dynamic voltage and frequency scaling. The key to their success is the ability to design in changes to the supply voltage and frequency based on the current processor loading.
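As a loose illustration of the DVFS concept described above (this is not GUC’s implementation; the operating-point table and the 20% headroom policy are invented for the sketch), a governor simply maps processor load to the lowest voltage/frequency point that covers it:

```python
# Minimal DVFS governor sketch: pick an operating point from processor load.
# Real implementations are hardware/firmware; values here are hypothetical.

# (frequency_mhz, voltage_v) operating points, lowest power first
OPP_TABLE = [(200, 0.8), (500, 0.9), (800, 1.0), (1200, 1.1)]

def select_opp(load_pct):
    """Return the lowest (freq, volt) point whose throughput covers the load."""
    # Assume peak throughput scales with frequency; keep ~20% headroom.
    needed_mhz = (load_pct / 100.0) * OPP_TABLE[-1][0] * 1.2
    for freq, volt in OPP_TABLE:
        if freq >= needed_mhz:
            return freq, volt
    return OPP_TABLE[-1]  # saturate at the fastest point

print(select_opp(10))  # → (200, 0.8)   light load: low voltage, low frequency
print(select_opp(90))  # → (1200, 1.1)  heavy load: full speed
```

Since dynamic power scales roughly with CV²f, dropping both voltage and frequency at light load is what makes this worthwhile.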

Another design variable that drives both design quality and time-to-market success is the ability to integrate high-speed interface IP into low-power designs at both the chip and system level. To that end, GUC began developing and designing its own high-speed interface IP starting at 65nm, including SerDes, PCI Express 3.0, USB 3.0, DDR2/3, LPDDR2/3 and a number of others. Today, that high-speed interface IP portfolio covers production nodes down to 28nm and continues to shrink to 20nm. Another GUC service is customizing IP for each application to meet the required specifications. To achieve comprehensive domain IP integration, package influences also need to be taken into consideration.

In parallel, GUC extends its high speed design capabilities through its Design for Test (DFT) services. The key objective is to improve yield by reducing peak power during testing so as to cut down the cost of testing.

While there is much more to the GUC low-power methodology than I can blog here, the point is that low power at high performance is a difficult design challenge, one that vexes many designers and requires a newfound expertise. The bottom line is that it might be worth checking out silicon-proven low-power design specialists the next time you face that specific challenge.



Mind the Gap — Overcoming the processor-memory performance gap to unlock SoC performance

by Sundar Iyer on 07-06-2012 at 3:25 pm

Remember the processor-memory gap: a situation where the processor is forced to stall while waiting for a memory operation to complete? This was largely a result of the high latency of off-chip memory accesses. Haven’t we solved that problem now with SoCs? SoCs are typically architected with their processors primarily accessing embedded memory, and accessing external memory only when absolutely necessary. However, while on-chip memory access latency is still a concern, embedded memories are also required to respond to back-to-back sustained access requests issued by one or more processors. In fact, networking data pipelines and multicore processors can hammer memory with a multitude of simultaneous accesses to unique random addresses, and the total number of aggregated memory accesses has been increasing dramatically. So once again, system architects are up against a processor-memory gap, this time with embedded memory. As a result, embedded memory performance has become the limiting factor in many applications (figure 1).

At Memoir Systems, we believe that the performance limitations of embedded memories are largely a result of the way the problem has been conceptualized. In fact, we have found it is possible to improve memory performance by a factor of ten using currently available technology and standard processes. In the past, thinking about embedded memories was limited to a purely circuit- and process-oriented approach. Thus, the focus was on maximizing the number of transistors on a chip and cranking up the clock speed. This was successful up to a point, but as transistors approach atomic dimensions, the industry ran into fundamental physical barriers.

At Memoir we have taken an entirely new approach with our Algorithmic Memory technology. Algorithmic Memories operate by adding logic to existing embedded memory macros. Within the memories, algorithms intelligently read, write, and manage data in parallel using a variety of techniques such as buffering, virtualization, pipelining, and data encoding. These techniques are woven together to create a new memory that internally processes memory operations an order of magnitude faster and with guaranteed performance. This increased performance capability is made available to the system through additional memory ports, such that many more memory access requests can be processed in parallel within a single clock cycle, as shown in figure 2. The concept of using multi-port memories as a means of multiplying memory performance mirrors the trend of using multicore processors to increase performance over uniprocessors. In both cases, it is the parallel architecture rather than faster clock speeds that drives the performance gains.
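One simple, well-known example of trading logic for ports (not necessarily one of Memoir’s proprietary techniques; the class and sizes below are invented for illustration) is building a one-write/two-read memory out of two mirrored one-read/one-write banks, so two independent reads can be served in the same cycle:

```python
# Sketch: emulate a 1-write/2-read memory from two mirrored 1R1W banks.
# Writes go to both banks, so each bank's read port can serve a different
# read address in the same cycle. Real algorithmic memories go further
# (banking, XOR coding, virtualization), but the principle is the same:
# extra logic and area buy extra ports.

class OneWriteTwoReadMemory:
    def __init__(self, depth):
        self.bank_a = [0] * depth  # mirrored copy A
        self.bank_b = [0] * depth  # mirrored copy B

    def cycle(self, write=None, read1=None, read2=None):
        """One clock cycle: optional write (addr, data) plus two independent reads."""
        if write is not None:
            addr, data = write
            self.bank_a[addr] = data  # the single write lands in both banks,
            self.bank_b[addr] = data  # keeping the copies coherent
        out1 = self.bank_a[read1] if read1 is not None else None
        out2 = self.bank_b[read2] if read2 is not None else None
        return out1, out2

mem = OneWriteTwoReadMemory(16)
mem.cycle(write=(3, 42))
mem.cycle(write=(7, 99))
print(mem.cycle(read1=3, read2=7))  # → (42, 99)  two reads in one cycle
```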

Algorithmic Memory technology is implemented as soft RTL. The resulting solutions appear exactly like standard multi-port embedded memories. The new memories employ dozens of techniques to accelerate performance or reduce area and power requirements. However, perhaps the greatest benefits of Algorithmic Memory come not from the individual algorithms, but from how they are integrated into an elegant system (figure 3). In this system, the memories not only perform better, but their performance is fully deterministic. Furthermore, not only can new memories be created very rapidly, but they are also automatically and exhaustively formally verified, and, since they are built on existing embedded memories, no additional silicon validation is required.

Algorithmic Memory gives memory architects a powerful tool to rapidly and reliably create the exact memories they need for a given application. Most importantly, though, it empowers system architects with new techniques to overcome the processor-memory gap, and further unlock SoC performance.

Dr. Sundar Iyer

Co-Founder & CTO


Cadence at Semicon West Next Week: 2.5D and 3D

by Paul McLellan on 07-05-2012 at 5:32 pm

Next week is Semicon West, at the Moscone Center from Tuesday to Thursday, July 10-12th. Cadence will be on a panel during a session entitled The 2.5D and 3D packaging landscape for 2015 and beyond, which starts with three short keynotes:

  • 1.10pm to 1.25pm: Dr John Xie of Altera on Interposer integration through chip on wafer on substrate (CoWoS) process.
  • 1.25pm to 1.40pm: Ryusuku Otah of Fujitsu on Large SIP for computer and networking application with 2.5D, 3D structure.
  • 1.40pm to 1.55pm: Dr Huili Fu of HiSilicon Technologies on The demands and the challenges of TSV technology application in IC and system.

Then from 2.20pm to 3.30pm there is a panel session on Ecosystem and R&D collaboration. Cadence will be represented by Samta Bansal, who I talked to about Cadence’s joint work with TSMC that they announced at DAC.


As I’ve said before, I think that 2.5D (and eventually 3D) are going to be very important, since it is not clear that we will be able to stay on track with lithographic scaling. With double and triple patterning we can manufacture, but it is very expensive and wafer prices are going up fast. EUV still looks a long way from possible commercialization, and it may never get there. In the meantime, high levels of integration can be achieved with CoWoS, along with the advantage of being able to mix die from different processes. We are at the early stages of this and there is still lots of work to be done, on the EDA side but even more so on the ecosystem and supply chain (who does what? when do you test? how do you ship ultra-thin silicon around without breaking it? etc). Since this is the topic of the panel session, it should be interesting to hear.

The session is free to attend if you are registered for Semicon.


Minitel Shuts Down

by Paul McLellan on 07-05-2012 at 5:02 pm

When I first came to the US, one project that we had going on at VLSI Technology was an ASIC design being done by a French company called Telic. The chip would go into something called “Minitel,” which France Telecom (actually still the PTT, since post and telecommunications had not yet been separated) planned to supply to most customers in the country. One function of Minitel was to let you look up phone numbers, and the system was paid for by the fact that France Telecom would no longer need to print and distribute white-pages phone directories.

But that wasn’t the only use. Lots of what today we would call websites came into existence, allowing catalog shopping, chat-rooms, booking travel and, yes, porn (minitel rose). It was a big success. Remember, this was the early-to-mid 1980s, long before the Internet escaped from its roots in academia and the military. Since almost nobody yet had computers at home, Minitel included a keyboard and screen. It connected over the phone lines using modem technology (at 1200 bits/second download and 75 bits/second upload).

I didn’t know it at the time, but I would end up moving to France and living there for over 5 years, and so ended up making extensive use of Minitel. The service had several tiers. You would dial 3611 for a searchable phone directory, which was free. Other numbers carried a nominal per-minute charge for things like catalog shopping (where the site itself wasn’t really the main business), and premium numbers existed for things like chat-rooms (where using the site was the business).

Of course, in the end, this turned out to be a dead end as general purpose computers became widespread, the Internet became commercial and, eventually, we all have broadband. There has been a lot of debate about whether Minitel helped or hindered Internet penetration in France. Since there were so many primitive terminals in use, people didn’t need to buy a computer and they could still have many of the benefits of the Internet. But eventually as people had computers anyway, and as data speeds went to 56 kbit/second modems and then broadband, it was increasingly obsolete. Last week, France Telecom finally retired the service on June 30th.

Being first is sometimes a hindrance compared to waiting. The London Underground is primitive compared to many subway systems (especially new ones in Asia) because it was partially built in the 1860s, before people could tunnel through hard rock effectively, so it is unnecessarily winding and the tube diameter is comparatively small (so small that there is no room for air-conditioning on the trains).

EDA startup companies often try to commercialize some technology before the market really requires it, and this typically means that the company runs out of money or considerably dilutes all the founders. Often it is a different company that finally finds success in the market.

With the success of the Mac it is easy to forget that Apple’s first attempt to build a GUI-based computer was Lisa, which was a failure (not to mention Xerox’s own attempt, the Star, even earlier and more expensive).

So being first isn’t always best. It’s the second mouse that gets the cheese.


IC Design at Novocell Semiconductor

by Daniel Payne on 07-05-2012 at 12:09 pm

In my circuit design past I did DRAM work at Intel, so I was interested in learning more about Novocell Semiconductor and their design of One Time Programmable (OTP) IP. Walt Novosel is the President/CTO of Novocell and talked with me by phone on Thursday. Continue reading “IC Design at Novocell Semiconductor”


Managing Differences with Schematic-Based IC Design

by Daniel Payne on 07-02-2012 at 2:41 pm

At DAC in June I didn’t get a chance to visit ClioSoft for a product update, so instead I read their white paper this week, “The Power of Visual Diff for Schematics & Layouts”. My background is transistor-level IC design, so anything with schematics is familiar and interesting.

The Challenge
Hand-crafted chip designs provide the highest performance and give the designer the greatest control over silicon IP area and specs. However, when you are part of a team of designers, you need to communicate to other team members what has changed in your schematic since the last version.

Approaches for Text Diff
When I started doing DRAM circuit design in the 1970s, we used colored highlighters to compare schematics against the netlist. Word processors like the Microsoft Office suite later added text-comparison features, so that as you worked on a document you could see a history of what had just changed.

Unix users have long had the diff utility; even my MacBook Pro today has diff on it for the occasional times that I use the terminal and need to compare text files.

In EDA you can even use an LVS (Layout Versus Schematic) tool like Calibre to compare two versions of a schematic or layout netlist and get some idea of what has changed.
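As an example of the text-based approach, a plain diff of two netlist revisions does flag the changed device line, but mapping that line back to the schematic is left entirely to the reader, which is exactly the limitation of text diffs (the tiny inverter netlist below is invented for illustration):

```python
# Diffing two revisions of a hypothetical SPICE netlist with Python's difflib.
import difflib

rev1 = """* inverter rev 1
M1 out in vdd vdd pmos W=2u L=0.18u
M2 out in gnd gnd nmos W=1u L=0.18u
""".splitlines(keepends=True)

rev2 = """* inverter rev 2
M1 out in vdd vdd pmos W=4u L=0.18u
M2 out in gnd gnd nmos W=1u L=0.18u
""".splitlines(keepends=True)

# Prints a unified diff showing the PMOS width change from W=2u to W=4u
for line in difflib.unified_diff(rev1, rev2, fromfile="inv_v1.sp", tofile="inv_v2.sp"):
    print(line, end="")
```

The diff tells you a device width changed, but not which transistor on the page it is; that gap is what a graphical diff closes.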

Approaches for Graphic Diff
With a schematic-based design you need a visual way to markup your schematics or IC layout to see what has changed since the last rev. You could try several approaches for tracking schematic or layout changes:

  • Netlist your schematics into EDIF or SPICE files
  • On IC Layout do an XOR of the GDS versions

The netlist approach works only on text, so you cannot see the changes in the more intuitive schematic form.

The XOR approach shows you visually what changed in the layout, but not the differences at the schematic level, which is more intuitive for a circuit designer.

The Visual Design Diff Approach
ClioSoft has created a visual diff tool that works in a Cadence Virtuoso design flow to quickly highlight what has changed on your schematic since the last rev.

You just click a new menu choice in Virtuoso under Design Manager to see changes to:

  • Nets
  • Instances
  • Layers
  • Labels
  • Properties

This new class of EDA tool can be used by either the circuit designer or the layout designer to determine what has changed between versions of a schematic or layout. This kind of automation can eliminate hours or even days of manual work compared to the old way of managing changes by hand.

In June 2011 ClioSoft added support for hierarchy in the visual diff tool, so now you can traverse all the levels of hierarchy of your designs and see the differences.

Also Read

ClioSoft Update 2012!

Hardware Configuration Management at DAC 2012

IC Layout Design at Qualcomm